4. Objections.

[with thanks to Tom Murcko]
"If (1) HI is correct, And if (2) HI should apply to all sentient beings, not just those on earth, Then (3) We have a moral obligation to spread throughout the universe as quickly as is practical, eliminating aversive experience and maximizing pleasure gradients everywhere. No 32
Furthermore, if also (4) There are a very large number (let's say at least millions) of intelligent life forms elsewhere in the universe, Then (5) It's a virtual certainty that at least some of them (and more likely, most of them) are substantially more intelligent than us, And (6) It's a virtual certainty that at least some of them are at least equally driven to their goals, at least some subset of which are likely to apply to the entire universe.
We can subdivide the life forms mentioned in (6) into three categories: Category A consists of those life forms which have the same goals and choose the same means as HI. This sounds unlikely but might not be. Consider: If (7) morality is absolute rather than relative (i.e. there is some correct way to behave), and if (8) morality has attractors (i.e. most or all sufficiently intelligent life forms will discover the right way to behave and at least some of them will choose to behave that way), and if (1) then (9) at least some other life forms will find HI persuasive and will work toward it.
If (9) and (4), and if (10) the most advanced life forms are best equipped to determine and then carry out HI to maximize the chances of success, then (11) it's probably the case that there is no need for humans to get involved in HI. This logic isn't airtight, however. For example, if (12) all life forms reason this way, then none would act, assuming that some other life form would take care of HI (unless one or more life forms thought or knew that they were the most advanced). In addition, it might be the case that (13) the best implementation approach involves several life forms, not just the most advanced one (perhaps to accomplish the goals of HI more quickly). Nevertheless, it seems fairly clear that if (9) and (4), then it's highly unlikely that humanity is in the best position to implement universe-wide HI.
Category B consists of those life forms which have the same goals but choose different means than us. Some of the points in Category A would apply, but an additional conclusion given (5) seems to be that we should trust their judgement. This appears to be true even if those life forms felt that the best approach included elimination of earthly life (and other similar life forms elsewhere).
Category C consists of those life forms which have different goals. If (6), then I believe that it is a virtual certainty that Category C is not empty; i.e., at least some life forms will have different goals than HI. If this is the case, and if (5), then it doesn't seem to matter much what we do, as the outcome will almost certainly be the goal of whichever life form is most advanced. This doesn't imply that (14) working toward earth-level HI goals is entirely pointless, but it does seem to substantially restrict the value of such efforts, making them local and temporary."
Most people believe that the complete abolition of suffering in Homo sapiens is impossible. Extending the circle of compassion to other animals via ecosystem redesign and genetic engineering seems even more far-fetched. So the prospect of some kind of cosmic rescue mission to promote paradise engineering throughout the universe has a distinct air of science fiction. This may of course be the case. The timescales are certainly daunting even for a single galaxy of 400 billion stars some 100,000 light years across - on the order of millions or perhaps tens of millions of years. The level of intellectual, political and sociological cohesion over time required to mount such a project eclipses anything human society could organise today. Moreover recent evidence from distant type Ia supernovae suggests that the expansion of the universe isn't slowing as hitherto supposed, but accelerating owing to poorly understood "dark energy". In consequence, perhaps only our local galactic supercluster will ever be accessible to our descendants.
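To make these timescales concrete, here is a minimal back-of-envelope sketch in Python. The cruise speed, hop distance and pause time are purely illustrative assumptions, not figures taken from this essay or from the von Neumann probe literature.

```python
# Rough arithmetic behind "millions or perhaps tens of millions of years":
# a colonisation wave crossing ~100,000 light-years in short hops, pausing
# at each settled system to build and launch the next generation of probes.
# All parameter values are illustrative assumptions.

GALAXY_DIAMETER_LY = 100_000   # quoted above for our galaxy
PROBE_SPEED_C      = 0.01      # assumed cruise speed: 1% of lightspeed
HOP_DISTANCE_LY    = 10        # assumed spacing between colonised systems
PAUSE_YEARS        = 500       # assumed refit-and-relaunch time per hop

transit_years_per_hop = HOP_DISTANCE_LY / PROBE_SPEED_C          # 1,000 years
hops                  = GALAXY_DIAMETER_LY / HOP_DISTANCE_LY     # 10,000 hops
total_years           = hops * (transit_years_per_hop + PAUSE_YEARS)

print(f"~{total_years / 1e6:.0f} million years to sweep the galaxy")
```

With these assumptions the answer is roughly 15 million years; halving the speed or doubling the pause still leaves the total within the "millions to tens of millions of years" band.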
Viewed purely as a technical challenge, however, the use of self-reproducing, autonomous robots - "von Neumann probes" - to explore and/or colonize our galaxy is both feasible and well-researched. The difference is that their purpose hasn't normally been conceived as a mercy mission for pain-ridden ecosystems that may have evolved elsewhere. [Ironically, notional "berserker probes" that sterilise all life have been discussed in science fiction, albeit not with a negative utilitarian ethic in mind.] Plausibility aside, it is ethically obligatory for utilitarians anywhere to maximise the well-being of all accessible sentience if it's technically feasible to do so - in the absence of any countervailing argument like the Objection above. Less clearly, an obligation to promote the substrates of well-being throughout the cosmos is arguably a disguised implication of various ethical systems that deplore merely "unnecessary" suffering. What "necessary suffering" might mean here is critical but ambiguous.
The most problematic premise in the Objection is perhaps number 4, i.e. the hypothetical existence of millions of other intelligent lifeforms. This assumption relies on the Drake equation or one of its variants in estimating the number of extraterrestrial civilizations with which we might come in contact. Any such assumption must overcome the Fermi paradox: "Where are they?" No discernible sign of extraterrestrial life exists - whether artefacts, physical presence or signals. There may indeed be an indefinitely large number of technologically advanced civilisations in the Multiverse as a whole, or in other domains, or in other branes on "braneworld" scenarios, or even in our domain outside the "Hubble Bubble" [according to the chaotic inflationary universe scenario pioneered by physicist Andrei Linde, quantum fluctuations divide the inflationary universe into a vast multitude of exponentially large domains or "mini-universes" where the laws of low-energy physics may be different]. Counterintuitively, as Max Tegmark points out, one popular cosmological model apparently predicts that each of us has an effectively identical twin in a galaxy typically around 10^(10^28) metres away. These distance scales are quite dizzying.
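For readers unfamiliar with it, the Drake equation is just a chain of multiplied factors, N = R* × fp × ne × fl × fi × fc × L. The sketch below uses purely illustrative parameter values (none are drawn from this essay) to show how sensitive N is to the biological and sociological terms; driving fl, fi or fc towards zero is one way of reconciling premise (4)'s optimism with the Fermi paradox.

```python
# Illustrative Drake equation sketch; every parameter value is an assumption.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilisations in the galaxy:
    star-formation rate x fraction of stars with planets x habitable planets
    per system x fraction developing life x intelligence x detectable
    technology x average detectable lifetime (years)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

optimistic  = drake(R_star=2, f_p=0.5, n_e=1, f_l=1.0,  f_i=0.5,  f_c=0.5, L=1_000_000)
pessimistic = drake(R_star=2, f_p=0.5, n_e=1, f_l=1e-6, f_i=0.01, f_c=0.1, L=10_000)

print(optimistic)    # 250000.0 contactable civilisations on generous guesses
print(pessimistic)   # of order 1e-05: effectively nobody to meet
```

The same algebra, fed with different guesses, yields anything from an empty galaxy to millions of civilisations, which is why it constrains premise (4) so weakly.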
The point in this context is that even if we are unique to the known universe, we need not be "special" - which would entail a rejection of the normal Copernican assumption. If inaccessible civilisations do exist beyond our cosmic event horizon, then their superintelligent inhabitants may well have transcended their evolutionary origins just as we are now poised to do. If such superbeings are benevolent, then they will presumably [given "moral attractors"] rescue any others within their light-cone who are physically accessible to being saved ("Category A"). It would be nice to think that cross-species deliverance from suffering was a universal law; the Objection raises the disturbing possibility ("Category C") that it isn't. The existence of hypothetical advanced lifeforms with the same goals as us but who choose different means ("Category B") might indeed shift the onus of responsibility away from the junior civilization. Yet how common is the multiple independent origin of technologically advanced civilizations within a cosmically narrow (space)time-frame?
This is all extremely speculative. Extensive scanning of the electromagnetic spectrum discloses no evidence that technologically sophisticated life exists in our galaxy, or anywhere else in the observable universe. This absence of evidence extends to what Russian astrophysicist Nikolai Kardashev described as "Type III civilizations" - supercivilizations that would employ the energy resources of an entire galaxy. Their electromagnetic signature could in principle be detected by SETI (Search for ExtraTerrestrial Intelligence) researchers as well. Nothing has been found. The search continues.
Many explanations of "The Great Silence" have been mooted. Why assume, for instance, that intelligent extraterrestrials will manifest anything resembling the motives, values, conceptual framework or colonial expansionism of contemporary Homo sapiens? Is our conception of intelligent life and its signature too impoverished for us even to have located the relevant search-space to investigate? But (very) tentatively, the conservative explanation of why an immense ecological niche remains unfilled is that the silence is just what it seems: no technologically advanced, spacefaring civilisations exist within a few billion light years of us. It's up to us.
This conclusion doesn't mean we are locally alone. The Objection is right to take the status of sentient beings in other worlds extremely seriously. If we could really be confident that Earth-based organisms were the only lifeforms in the accessible universe, or that only minimally sentient microbial life existed in other worlds, then eliminating suffering on our planet would effectively discharge our ethical responsibilities. Once our world was cruelty-free, we could retreat into our own private nirvanas - or perhaps build heaven-on-earth and extend it to terraformed worlds beyond. Yet it's also possible that complex life and suffering - perhaps intense suffering - exists in alien ecosystems within our cosmic event horizon; and that such lifeforms are impotent to do anything about their plight, i.e. they are as helpless as all but one species on contemporary Earth. The presence of such malaise-ridden lifeforms would be undetectable to us with current technology. We have no empirical evidence of their existence one way or the other.
So how likely is such a scenario on theoretical grounds? Life's origins apparently lie early in Earth's 4.6 billion-year history. Deceptively perhaps, its rapid emergence suggests that the process may be relatively "easy" - and thus spontaneously repeated on a massive scale on Earth-like planets across the cosmos. Yet we still can't explain how the primeval "RNA world" preceding our DNA regime came into being. Nor can we yet synthesise life in vitro, or computationally simulate its genesis on Earth. So it's quite possible that only a freakish chain of circumstances allowed life to get started in the first instance. Piling improbable event on improbable event, another chain of contingent circumstances over several billion years allowed multicellular eukaryotic life to evolve. Eventually, life arose with the capacity to rewrite its own source code. It's unknown how many significantly different developmental pathways exist leading to organisms capable of scientific technology, or where the biggest evolutionary bottlenecks lie.
There is another imponderable here too. How likely is it that any primordial alien life will undergo suffering, or even be sentient, if its substrate differs from our familiar organic wetware? We know that our silicon (etc.) robots can be programmed to exhibit the quasi-functional analogues of "mental" and "physical" pain and pleasure, and display a repertoire of "emotional" behaviour without any relevant "raw feels". Will putative extraterrestrials likewise be akin to zombie automata - "intelligent" or otherwise? [If so, would their fate matter?] Or more plausibly, will extraterrestrial life be sentient like us (or perhaps hypersentient)?
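The "quasi-functional analogue" point can be made concrete with a toy sketch. The controller below is entirely hypothetical and written only for illustration: it implements the functional role of pain (damage detection, withdrawal, learned avoidance) while plainly implying nothing about phenomenal "raw feels".

```python
# Toy controller with a functional "pain" analogue and no implied sentience.
# Everything here is an illustrative assumption, not a claim about real robots.

class ToyAgent:
    def __init__(self):
        self.avoided = set()            # stimuli the agent has "learned" to shun

    def nociceptive_signal(self, damage: float) -> float:
        """Scalar damage signal playing the functional role of pain."""
        return min(1.0, max(0.0, damage))

    def step(self, stimulus: str, damage: float) -> str:
        pain = self.nociceptive_signal(damage)
        if pain > 0.5:                  # "withdrawal reflex" plus one-shot learning
            self.avoided.add(stimulus)
            return "withdraw"
        if stimulus in self.avoided:    # learned avoidance of a past noxious stimulus
            return "avoid"
        return "approach"

agent = ToyAgent()
print(agent.step("flame", damage=0.9))  # withdraw
print(agent.step("flame", damage=0.0))  # avoid, though nothing was ever felt
```

The "pain" here is just a control variable; nothing in such code settles whether anything is actually felt.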
Here at least we can rationally speculate: extraterrestrial life is probably sentient rather than zombie-like, though its modes of sentience may be very different from ours. For there are powerful reasons for thinking that all primordial information-bearing self-replicators must be carbon-based owing to the functionally unique valence properties of the carbon atom. Likewise, primordial life-supporting chemistries probably require liquid water. [If and when organic life becomes technologically advanced enough to build silicon robots, create "post-biological" digital life, design self-replicating nanobots, run "simulations" in quantum computers, etc., all bets are off.] If such primordial organic life ever reaches a multicellular stage, then the binary coding system of a pleasure-pain axis embedded in a nervous system is an informationally efficient solution to the challenges of the inner and outer environment, albeit brutishly cruel. So if hypothetical early alien life stumbled upon the molecular mechanisms underlying the pleasure-pain axis, then the information-processing role of its gradients will plausibly have been harnessed by natural selection to boost the inclusive fitness of self-propelled organisms - as it has on Earth. No "programmer" or designer is needed. Moreover, given the comparatively narrow range of habitats in the physical universe that could sustain primordial multicellular life, the phenomenon of convergent evolution may mean that all such life, wherever it evolves, isn't going to be quite so exotic as astrobiologists sometimes suppose. [By contrast, advanced life and consciousness could be unimaginably exotic.] If so, then the same abolitionist blueprint for ecosystem redesign and genomic rewrites should be applicable to other planetary biospheres - if we decide to intervene in Darwinian worlds rather than retain their ecological status quo.
That's a lot of ifs. Right now, it's difficult to care deeply about the plight of creatures who may not even exist, or who may be accessible only to our distant post-human descendants. Ecological charity, one feels, begins at home. Yet such indifference may be a reflection of our limited psychology, not a moral argument for inertia. Naturally, we may all be mistaken in ways that exceed our conceptual resources to imagine or describe. Alternatively, something on the lines of the Objection may be correct. Certainly we rarely, if ever, understand the full ramifications of what we are doing. It's hard enough to plan ahead for the next five years, let alone envisage interstellar travel for the next five million. [This is one good reason not to get trapped in a rut of wirehead hedonism or its chemical counterparts rather than strive for superintelligent well-being.] Yet to opt for a deliberate policy of non-interference - whether in the lives of our suffering fellow humans, non-human animals, or primordial extraterrestrials - is no less morally fraught than paternalistic intervention. The argument that we should do nothing until we fully understand its implications cuts little ice in an emergency - and the horrors of a living world where babies get eaten alive by predators, creatures die of hunger, thirst, and cold, etc, must count as morally urgent on all but the most Disneyfied conception of Mother Nature. Analogously, it would be morally reckless for us to shun the use of, say, anaesthetics, pain-killers, veterinary interventions and similar "unnatural" novelties on the grounds that their use poses unknown risks - even though these risks surely exist and should be researched with all possible scientific rigour.
There are indeed ethical pitfalls in "playing God". These pitfalls would be even greater if [as the Objection assumes] there exist god-like extraterrestrial lifeforms better equipped than us to do so. Yet on both a domestic and cosmological scale, moral hazards exist for absentee landlords as well as for hands-on managers. Inaction can be culpable too. Here on Earth, there might seem a moral imperative to intervene and rescue, say, a drowning toddler on (almost) any ethical system at all. But what if that child grows up to be Hitler's grandfather (etc)? We can't know this, since we don't yet carry pocket felicific calculators. Yet the risk is presumably worth taking: we don't let the child drown. Likewise, if your hand is in the fire, you withdraw it. If you are benevolent, then you do the same to rescue a small child or animal companion who is suffering similar agony - whether you are formally a utilitarian ethical theorist or not. The moral sceptic might argue that all value judgements are truth-valueless; but (s)he can't argue consistently that we ought to believe this - or behave in one way rather than another. Taking the abolitionist project to the rest of the galaxy and beyond sounds crazy today; but it's the application of technology to a very homely moral precept writ large, not the outgrowth of a revolutionary new ethical theory. So long as sentient beings suffer extraordinary unpleasantness - whether on Earth or perhaps elsewhere - there is a presumptive case to eradicate such suffering wherever it is found.