The Dystopian Objection to Cryonics
Author’s Note: this post is based on an old Discord rant.
I have previously discussed some of my problems with rationalist arguments for immortality, but at the end of the day, much of my thesis was just “there is a compromise position that seems uncontroversial on all sides, and the parts that remain controversial should be, so please calm down”. Recently I realized that there is another, related topic where my contribution may be more of a real departure from past work I have seen: cryonics.
For those unfamiliar with or skeptical of cryonics, this is one of the strongest introductions to the current practices, and defenses thereof. To briefly summarize it: cryonicists think that future medicine may have the ability to resurrect people we would now consider dead,[1] and that what is essentially needed for this to be possible in theory is sufficient information about your brain structure (Urban also concedes that even in theory, any information stored as continuous electrical activity, such as short-term memory, would not be preserved). Therefore, it is reasonable to use cooling and vitrification to preserve as many details of your brain as possible. Even if it is a long shot that it will work, most people don’t want to die, and this could be compared to a long-shot experimental drug someone on their deathbed might pay large amounts of money to be trialed on.
A few of the arguments against this seem poor to me (some seem to just misunderstand it as basically Futurama, where you get stuck in a freezer and then defrosted in a thousand years as though no time had passed; this, of course, simply wouldn’t work). Others seem hokey, but not factually incorrect, such as the previously mentioned argument that death gives life meaning. Yet others seem more reasonable to me, such as the argument that the resources used for cryonics could be better spent elsewhere, but these are vulnerable to charges of a double standard, since people are generally not as bothered by other expenses that have more frivolous motives, or by comparable cases of people going to great lengths and resorting to experimental treatments to stave off death for themselves and their loved ones.
My own impression of cryonics, similar to my impression of immortalism, is that many of the critics are ultimately appealing to the weirdness of futurism in this area, that god dammit something must be wrong here. But while the futurists are being appropriately weird in my own eyes, they are being insufficiently dystopian. It is my general impression that futurists (as opposed to science fiction writers, who have plausible deniability about how seriously they take their writing as speculation) are too utopian, and not dystopian enough, in their thinking. I have written at some length about the general arguments for this for my MA thesis, but for my own purposes, Brian Tomasik has already covered many of the relevant points I might highlight.
As I have said, an important part of the cryonics argument is that it might not work (I think it probably won’t), but that you don’t have much to lose in that case: the money you invested, but not much else. I think this is too optimistic, and in particular, there may be some very bad scenarios in which one wakes up. To start with, there are two scenarios Urban highlights as possible ways one could be revived. One is nanotechnology: we might eventually develop extremely small robots that can approximate repairs to all of the damaged parts of your body. This scenario may or may not be more likely than the other possibility, but it is not all that pertinent to my arguments; if this is how revivals tend to happen, I think they will probably go fine, even great, for the subjects. The other possibility is whole brain emulation, wherein the information about someone’s brain structure is used to create a simulated brain that is psychologically identical to the meat brain. This is the scenario where most of my concerns lie. Since I have been interacting with Robin Hanson’s work more often recently, I feel it is only fair to note in advance that he wrote an entire book speculating in detail about what futures involving whole brain emulations would look like. I have not read this book, and don’t know how much of my own thinking is covered/weakened/strengthened in it.
I know that speculation about future scenarios in which brain emulations undergo mistreatment is not novel. One of the most notorious and mocked thought experiments in this area is “Roko’s Basilisk”, in which it is speculated that a future superintelligent AGI will create and torture whole brain emulations of everyone in the present who did not work as hard as they could to protect its future existence. There are other, perhaps more plausible, speculative scenarios in which emulations are tortured intentionally, for example if future religious cults decide to create heaven and hell simulations to put emulations in, or if people of the future despise people from the present day for values and actions we consider normal now but which they will consider despicable. Another, less severe set of possibilities I have seen speculated on in some science fiction is that emulations are sold and used as slaves of some sort, for instance if AGI proves hard to control or produce. As with Roko’s Basilisk, I consider this specific possibility really unlikely, but a related, possibly more likely one is that emulations are bought and sold as IP for use in in silico research.
Specific possibilities of this sort are all rather speculative and hard to take too seriously, so what are the more general threads these obscure situations share that are worth worrying about? There are two in particular that I find significant. One is that emulations of past brains might not be afforded the protections of other humans, and so will be susceptible to worse situations. The simplest version of this is to observe that the interests of those unable to influence policy aren’t reliably represented by it. Indeed, this is the cause of many of the current moral problems I consider most severe, like factory farming and the worsening extinction risk landscape.
Tim Urban made the counterpoint that there would be people at the time who were committed to cryonics programs, who could continue pushing to preserve the values of cryonics organizations, and that there may be even more of them, since the time leading up to possible revival would presumably see the prospects of that revival seem more and more likely. To an extent I think this underrates the fact that reviving or protecting people who would die from aging, illness, or injury at a given time is presumably much easier than reviving those who have already “died” and been preserved. If so, this suggests that there will be a long period when people have much less motive to commit to cryonics anyway, as they may already have what is essentially indefinite life, except in the case of severe accidents, where cryonics is unlikely to be viable anyway.
Indeed, we may be in a narrow strip of time when people can engage in at least possibly effective cryonic preservation, before it becomes obsolete. In my opinion the most likely outcome is that, even if revival becomes technologically feasible at some point, the bodies will end up lost or badly damaged in some way before then; but if they aren’t, the cryonically preserved may be a relatively small, voiceless, ancient minority.
There are various rationalizations that might be given at this point for why the preserved don’t deserve full rights. For instance, as mentioned, future people might despise present people for their values in some way; or, if it’s long enough in the future and/or human enhancement has developed significantly by then, past people may be considered primitive. Sub-transhuman.[2]
Another possibility is that brain emulations wouldn’t be given full human rights: while the cryonically preserved people themselves wouldn’t be considered inferior, emulations based on them would be, and might even be considered public-domain sources for emulations, or the IP of the companies holding the preserved brains. It might seem strange to worry about the possibility of eventually being an emulation denied human rights in an era when contemporary people might themselves be emulatable, yet wouldn’t, in this hypothetical, feel connected enough to their emulations to set out to protect their rights. After all, if they don’t feel connected to their emulations, should you?[3]
I wouldn’t take comfort in this thought. There are strong reasons why you may have to worry for your future emulations even in a world where contemporary people don’t. One of the classic thought experiments in the philosophy of personal identity involves a transporter, which destroys a person in one place and rebuilds a perfect replica of them in another. In this version, intuitions about whether the person survives may diverge, but the common next step is to imagine the same machine, modified so that it doesn’t destroy the original person. In this second version, the intuition points very strongly in the direction that the original body is the same person, and, therefore, that the exact replica is a different one. This can be compared in intuitive ways to emulation scenarios.
An emulation-from-cryonics scenario is analogous to the first version of the transporter, while cases where a living person’s brain is emulated are closer to the second version. It is my view that the intuition that the emulation is less the same person than the original body is a mistake. In particular, if we imagined a world in which, in fact, the person survived only in the upload and not in their body, it would feel internally identical for all involved (in terms of psychological evidence of identity) to the default interpretation that they survive only in their own body. This means that an emulation of a cryonically preserved mind should be viewed as a continuation of the same person. However, even though it also means that emulations of corporeal people are best thought of as having as much claim to being the original person as anyone, the intuition may persuade corporeal people otherwise, and the emulations, once made, might be contained in a way that makes strong self-advocacy futile.
This is of course only relevant if living corporeal people can be emulated at all, and it seems as though it would be much easier to emulate preserved brains (which could, after all, be thinly sliced, put through chemical baths, and processed over long periods, none of which would be survivable for living brains). If living people could not have their brains emulated, they wouldn’t even have a theoretical rational self-interest in protecting the rights of emulations. More to the point, however, regardless of whether we should actually think of emulations of cryonically preserved brains as the surviving original mind, the only cases in which the emulation scenario is relevant to arguments in defense of cryonics are those in which, to some degree, it turns out to be a form of real survival.
And now we get to the second premise of my general argument for concern: brain emulations are presumably replicable. Perhaps you won’t wake up in a world in which cryonic brain emulations have no rights and are casually used in ways you wouldn’t want, but your emulation could be copied.
A very plausible future is one in which the world is not dominated by vengeful moralists looking to punish historical people for bad values, or by religious cults sorting people into digital heavens and hells, or by unethical researchers picking apart and using digital minds, but in which all of these groups exist alongside the more benign or positive ones, and your information is likely to be reused by many of them over time. People of the time likely won’t worry about this too much even if they can be emulated as well, for the reasons already given, but it seems as though every feature of the original emulation that might constitute “survival” for you would be shared by every other copy in existence.
I don’t know the relative likelihood or frequency of the different negative reasons someone might want to revive you in the future as an emulation, but while some of the specific negative hypotheticals I have suggested seem unlikely, so do the prospects of being revived for ordinary survival at all. Plausibly some cryonically preserved people will remain preserved long enough to be contemporary with the technology needed to resurrect them (assuming such technology is ever successfully developed), but I suspect most will be lost in the intervening time one way or another.
If there are only a few preserved people, and the relevant organizations have the funds and continued value alignment necessary, the odds of being revived are at least decent, but even making it this far does not guarantee you will be resurrected. If you are, it might be by a value-aligned organization, but you are at the mercy of whoever currently has the motive and capability to resurrect you, as well as whoever is able to copy you subsequently. You might wake up at once into a million heavens and a million hells across the whole future of your surviving brain information.
I can’t think of particularly dangerous risks attached to nanotechnology revival, and if you are resurrected both as an emulation and in a nano-healed physical body, perhaps this seems enough like the second version of the transporter, in which your body is replicated but not destroyed, that the worry is not as intuitively strong in this case.
I am not totally sold on this, though, and it seems unlikely to me that you would be both emulated and repaired; more likely, your body would be destroyed or discarded after the emulation process. It at least seems very reasonable not to want to pay for cryonics in exchange for a chance at both wonderful and terrible futures. I don’t consider this enough to definitively establish cryonics as irrational, but it does make it seem, at the very least, rationally optional to me, and not something I am likely to sign up for.
The framing of cryonics I have seen from more death-parochial figures like Eliezer Yudkowsky often leaves me uncomfortable. In his email thread on his brother Yehuda’s death, for instance, he describes talking to his remaining loved ones about cryonics, and how they “chose to commit suicide, as expected.” I do not entirely object to this characterization, except for its use as a dismissive argumentative cudgel, or even as a non-central fallacy. His are not even the most upsettingly dismissive comments in the thread (and given the circumstances, they are by far the most understandable in this respect). Far more uncharitable are the words of Michael Dickey, who also described the resistance of those he has tried to talk into cryonics:
“most grope for excuses not to, disguising their disregard for their own existence with appeals to mysticism or dystopian futures.”
While I often see appeals I consider mystical and hard to take seriously, the offhand mention of dystopian futures suggests Dickey’s interlocutors might have had concerns very similar to mine, which were then dismissed as a form of denial. Perhaps if suffering didn’t exist, I could understand people being parochial about death to the point of calling all critics self-deceiving. But as it happens, while both have some deep intuitive appeal to me, I am much, much more sympathetic to parochialism concerning suffering, at least extreme suffering, than to parochialism concerning death, and there is a pretty natural tension between these two values in many cases. To call appeals to dystopian futures filled with unknown, possibly extreme, suffering a form of denial seems to insist that concern for suffering is merely some sort of coping mechanism for ignoring the true value system of preventing death. (And indeed, some of Dickey’s bizarre comments about Buddhists opposing suffering seem to support this interpretation, or something like it.)
Even the framing of cryonics as analogous to an experimental, long-shot drug treatment doesn’t seem apt to me, considering all I have brought up. In my view, the weirdness of cryonics makes it neither wrong nor just some creepy Silicon Valley cult, as I often see it characterized; but taking the weirdness seriously means committing yourself to the mercy of an unknown far-future civilization, in a position of potentially great vulnerability to oppression relative to the others who will live in that time.
1. In his writing, Tim Urban denies that resurrection is the appropriate characterization, but how you define death doesn’t really change any material aspect of the argument. ↩︎
2. Ed. Note: Another possibility is that people do want to revive past people, in order to study them. This could also be dystopian for the revived, though. ↩︎
3. Ed. Note: There’s an analogy to children here, in how their parents care about their fates, but it’s not very strong in this instance. For one, emulations are more psychologically distant from almost anybody who would plausibly create them than children are from their parents. ↩︎