AI to Infinity and Beyond
Humanity’s survival rests on the shoulders of giants (giant AI robots to be specific)

(Edit: I was made aware of an error in the probability below, which has since been corrected. The chance of eternal consciousness plus the chance of not having eternal consciousness can’t be greater than 100%.)
The Conscious Wager
Imagine there is a button in front of you. If you press it, consciousness has a 99.999% chance of going extinct within a billion years, with a 0.001% chance of eternal life. Do you press it? Most people would likely choose not to. Now imagine that, before deciding, you are informed that consciousness already has at least a 99% chance of going extinct within a billion years, with a 0% chance of eternal life. Now do you press it? I think you definitely should in this case, and I think most people would agree.
I will argue why AI gives consciousness its best chance of eternal survival and why (if I am right) we should prioritize building AI as soon as possible. After all, infinite expected value with a few added orders of magnitude of risk is still infinitely better than a world where extinction is guaranteed. Before I do that, though, let’s examine the likely doom of consciousness!
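To make the wager concrete, here is a minimal expected-value sketch in Python. The probabilities are the illustrative ones from the thought experiment, and the “value” of eternal consciousness is modeled as infinite while any finite run of consciousness is normalized to 1:

```python
import math

def expected_value(p_eternal: float, finite_value: float = 1.0) -> float:
    """Expected value when eternal consciousness pays off infinitely and extinction pays finitely."""
    if p_eternal > 0:
        # Any nonzero probability of an infinite payoff makes the expectation infinite.
        return math.inf
    # Otherwise only the finite run of consciousness before extinction remains.
    return finite_value

print(expected_value(p_eternal=0.00001))  # inf  -> pressing the button
print(expected_value(p_eternal=0.0))      # 1.0  -> the status quo
```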
Humans Will Almost Certainly Go Extinct
There is virtually no doubt about the natural existential end of humanity in the far future, whether by boiling oceans, the Sun’s red giant phase swallowing the Earth, asteroid impacts, gamma-ray bursts, or the eventual heat death of the universe, aka The Final Curtain (dun dun dunnn!).
However, if we are to have a chance at surviving the near-inevitable demise of consciousness, I believe it will reside in the combined effort of human will and artificial intelligence. Nevertheless, there are still reasons to think that consciousness will go extinct much sooner, even with human will and the existence of AI.
1. Doom and Gloom
There are a multitude of ways humans will likely go extinct before natural causes bring about our end.
1.1. The Vulnerable World Hypothesis
There is a hypothesis that suggests civilization’s technological progress is like an urn of inventions with three types of balls in it: white, grey, and black.1 Humans inventing new technologies is like pulling these balls out of the urn.
Pull out a white ball and you have a benefit to humanity with few or no costs (think Pareto-like improvements: benefits with almost no downsides). An example is successful vaccine technology. Pull out a grey ball and you have a mixed technology that can have massive benefits or massive harms. An example is nuclear technology: it can be used to provide effective power generation for entire cities, or it can be used to obliterate them. Pull out a black ball, however, and you jeopardize all of civilization. The contemporary candidate most often predicted to be a black ball is AI development. Another realistic example is a deadly pathogen accidentally released during gain-of-function research.
Humans will continue to develop technologies for as long as we live. Even if only one black ball is in the urn, humans will eventually pull it out, and humanity as we know it will perish.
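To see why even a tiny share of black balls is eventually fatal, here is a small simulation sketch of the urn model. The one-in-a-thousand black-ball probability and the draw counts are purely illustrative assumptions of mine, not figures from Bostrom’s paper:

```python
import random

def survives(draws: int, p_black: float) -> bool:
    """True if a civilization makes `draws` inventions without pulling a black ball."""
    return all(random.random() > p_black for _ in range(draws))

p_black = 0.001  # illustrative assumption: 1 in 1,000 inventions is civilization-ending
trials = 10_000

for draws in (100, 1_000, 10_000):
    survival_rate = sum(survives(draws, p_black) for _ in range(trials)) / trials
    print(f"{draws:>6} inventions -> survival probability ≈ {survival_rate:.2f}")

# Analytically, survival probability is (1 - p_black) ** draws, which tends toward zero.
```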
1.2. The Precipice Calculation
Toby Ord writes:
Overall, I think the chance of an existential catastrophe striking humanity in the next hundred years is about one in six. This is not a small statistical probability that we must diligently bear in mind, like the chance of dying in a car crash, but something that could readily occur, like the roll of a die, or Russian roulette.2
The era we have entered since 1945 is one where humans are no longer threatened only by natural risks, but by anthropogenic risks too. These anthropogenic risks are also drastically more dangerous to humanity than the natural ones.
The risks of nuclear fallout, deadly pathogens, or misaligned AI, all caused by humans, greatly outweigh the natural risks of asteroids or super-volcanoes. An unsettling illustration of how badly we prioritize our own survival: humanity spends more on ice cream than on ensuring the technologies we create do not destroy us.3
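To see how such per-century odds would compound, here is a quick back-of-the-envelope sketch. The assumption that the one-in-six risk repeats every century is mine, purely for illustration; Ord’s estimate covers only the next hundred years:

```python
# Illustrative only: assume a constant one-in-six chance of existential catastrophe per century.
p_survive_century = 5 / 6

for centuries in (1, 5, 10, 20):
    p_survive = p_survive_century ** centuries
    print(f"{centuries:>2} centuries -> survival chance ≈ {p_survive:.1%}")

# 1 -> 83.3%, 5 -> 40.2%, 10 -> 16.2%, 20 -> 2.6%
```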
1.3. Village Idiots
There is a distant risk of nanotechnology spreading like pollen, replicating as fast as microorganisms, and consuming the biosphere in order to replicate itself, leaving humanity without the living environment it needs to survive.4 This is the nightmare first described by Eric Drexler (the pioneer of molecular nanotechnology), who gave it the name ‘gray goo’.
As a more short-term risk, Sir Martin Rees wagered that an instance of bioerror or bioterror would kill a million people by the year 2020.5 He lost the bet, although the result is disputed (Covid-19 did kill more than a million people and can arguably be counted as a bioerror). Regardless, the wager shows how seriously Rees takes the probability that individuals, through malice or error, will unleash deadly pathogens on the human species.
Both of these cases speak to the problem raised by Sir Martin Rees whereby:
The global village will have its village idiots, and they will have global range.6
The general problem is that individuals gaining access to such technologies increases the chance that humanity ceases to exist.
1.4. Paperclips > People
For AI to be an existential risk, it does not need to have malice towards humans. It can have arbitrary goals, such as producing paperclips.7 If the AI determines that humans are preventing it from producing sufficient paperclips, it may get rid of humans to ensure it can achieve peak paperclip production.
This means that the alignment problem isn’t merely about getting AI to avoid viewing humans as bad; it requires that AI align perfectly with human interests to ensure that we aren’t swept off the Earth the way one sweeps dust into a dustbin to keep a patio clean.
1.5. The Moloch Trap
Scott Alexander makes a case for why developers of safe AI might be run out of the market by developers of merely capable AI, because capability is what the market demands and what developers will cater to. More specifically, he says:
From a god’s-eye-view, we can agree that cooperate-cooperate is a better outcome than defect-defect, but neither prisoner within the system can make it happen.
This is the problem of market failures reframed. Basically, the problem occurs whenever individual rationality does not lead to group rationality (rationality in the instrumental sense, that is). This is a common problem in economics and is often used as a critique of free markets (though what is usually missed in that critique is that the alternatives to free markets suffer from market failures too, but that is neither here nor there).
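The structure Alexander is pointing to can be written down as a standard prisoner’s dilemma. The payoff numbers below are the usual textbook ones, chosen for illustration rather than taken from his essay:

```python
# A minimal prisoner's dilemma. Each AI developer chooses to "cooperate" (develop safely)
# or "defect" (race ahead). Payoffs are (row player, column player); higher is better.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Whatever the other player does, defecting pays the individual player more...
for other in ("cooperate", "defect"):
    if_cooperate = payoffs[("cooperate", other)][0]
    if_defect = payoffs[("defect", other)][0]
    print(f"other plays {other}: cooperate pays {if_cooperate}, defect pays {if_defect}")

# ...so both defect and land on (1, 1), even though (3, 3) was available to the group.
```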
1.6. Humans Are Deadly
If anything is to be taken from all of this doom and gloom, it should be that humans are incredibly deadly. I am convinced that the likeliest cause of death of the human species will be the human species itself. Whether we are killed by governments, by AI used nefariously against one another, by blowing everyone up with nukes, by releasing deadly pathogens into the air, or by our own errors of judgment in organizing against existential risks, humans remain humanity’s greatest existential threat.
From Michael Huemer in an article he wrote about conformity:
Human beings are dangerous. They are by far the most dangerous animals on Earth, and among the most dangerous of all phenomena that you are likely to encounter in your life. It is historically very common for human beings to decide to rob, injure, or kill one another, and they tend to be very good at doing so.
2. AI as the Lifeboat
2.1. Lifeboat #1: Build an Ark
There is a hypothetical arrangement of matter that is organized to be maximally efficient for computational purposes. This hypothetical material is called computronium (a term coined by Norman Margolus and Tommaso Toffoli of the Massachusetts Institute of Technology).8 Computronium would allow information processing to use little to no energy, meaning consciousness could theoretically persist into the latter stages of the heat death of the universe if it could be uploaded to such material. This would turn useless matter into useful matter, providing more computational capacity to sustain consciousness and to solve the larger problems involved in keeping consciousness going eternally.
Why this requires AI: In order to make computronium, matter needs to be rearranged at the atomic level. It would require the coordination of 10²⁴ nanobots acting simultaneously. Only extremely advanced AI will be able to handle this level of coordination.
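For a sense of the physical floor on computational energy costs, here is a quick sketch of Landauer’s limit. The bit count is an arbitrary illustrative number, and treating computronium as something that approaches this bound is my gloss rather than a claim from the cited paper:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules(bits: float, temperature_k: float) -> float:
    """Minimum energy to irreversibly erase `bits` bits at temperature `temperature_k`."""
    return bits * k_B * temperature_k * math.log(2)

# Illustrative: erasing 10**20 bits at room temperature (~300 K) vs. a colder universe (~3 K).
print(landauer_joules(1e20, 300))  # ≈ 0.29 J
print(landauer_joules(1e20, 3))    # ≈ 0.0029 J -> the colder the universe, the cheaper the computation
```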
2.2. Lifeboat #2: Lift the Stars
A possible strategy to extend the life of stars is what’s called ‘star lifting’, where matter is stripped off a star to keep it burning longer. This can extend the life of stars for hundreds of billions of years.9 That would allow more time to harness energy and keep consciousness alive for eons. The additional time and energy will be needed to build the technologies required for harvesting energy from black holes.
Why this requires AI: The process of star lifting can take millions of years. Humans are unlikely to be able to concentrate on tasks requiring that level of attention for such long periods, let alone coordinate for that long. Executing this technology will require constant monitoring of billions of data points at any given time, with near-perfect synchronization of the swarm technology involved. AI will be the only means capable of realizing such a strategy.
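The intuition behind the payoff: main-sequence lifetime scales steeply with mass, so stripping mass off a star buys enormous amounts of time. A rough sketch using the standard textbook scaling rather than the numbers from the cited paper:

```python
# Rough main-sequence lifetime scaling: t ≈ 10 Gyr * (M / M_sun) ** -2.5,
# which follows from the approximate mass-luminosity relation L ∝ M ** 3.5 and t ∝ M / L.
def lifetime_gyr(mass_solar: float) -> float:
    """Approximate main-sequence lifetime in billions of years."""
    return 10.0 * mass_solar ** -2.5

for mass in (1.0, 0.5, 0.2, 0.1):
    print(f"{mass} M_sun -> ~{lifetime_gyr(mass):,.0f} Gyr")

# 1.0 -> ~10 Gyr, 0.5 -> ~57 Gyr, 0.2 -> ~559 Gyr, 0.1 -> ~3,162 Gyr
```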
2.3. Lifeboat #3: Black Holes and the Baby-Verse
There will come a time in the universe, known as the black hole era, when essentially only black holes exist. This era will require incredibly advanced technology to harvest energy from the spin of black holes via the Penrose process.10 This process will be necessary to store as much energy as possible to live out the endgame of the universe, known as the dark era, for as long as possible.
During the dark era, consciousness is in a race against time to figure out how to jump into a new pocket cosmos (a baby universe, so to speak). The goal is to make a big bang that leads to quarks, leptons, electrons, and eventually stars and planets, giving consciousness the opportunity to flourish once again. To do this, we need to create a false vacuum. Michio Kaku writes:
There may be still another way to create a baby universe. One might heat up a small region of space to 10²⁹ degrees K, and then rapidly cool it down. At this temperature, it is conjectured that space-time becomes unstable; tiny bubble-universes would begin to form, and a false vacuum might be created.11
Why this requires AI: Producing black hole harvesters would require eons of dedicated focus, along with non-biological husks that can survive near the ergosphere of a black hole and react at the speed of light to make decisions on the fly (literally!). Humans are many magnitudes short of ever being capable enough to execute such missions.
For generating the new world, there would need to be particle colliders the size of solar systems, with magnets at regular intervals to keep the beams in the proper configuration for atom smashing.12 It is safe to say this would be an unimaginable feat for humans. The level of complexity involved in building, operating, and maintaining everything requires a civilization far more advanced than ours, with intelligence far superior to our own.
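For a sense of the stakes, the Penrose process can in principle extract up to about 29% of the mass-energy of a maximally spinning black hole. A quick sketch of that upper bound, using an illustrative ten-solar-mass black hole:

```python
import math

c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def penrose_max_joules(mass_kg: float) -> float:
    """Upper bound on rotational energy extractable from an extremal Kerr black hole."""
    # E_max = (1 - 1/sqrt(2)) * M * c^2, roughly 29% of the hole's mass-energy.
    return (1 - 1 / math.sqrt(2)) * mass_kg * c ** 2

# Illustrative: a maximally spinning black hole of 10 solar masses.
e_max = penrose_max_joules(10 * M_SUN)
print(f"{e_max:.2e} J")           # ≈ 5.2e47 J
print(f"{e_max / 3.8e26:.2e} s")  # ≈ 1.4e21 seconds of the Sun's current output
```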
3. Do Not Go Gentle (Conclusion)
The lifeboats all appear to have small chances of success, to say the least. Any one of them by itself isn’t enough to beat the risk; they likely all need to succeed for consciousness to have any chance at survival. Enormous effort, coordination, computation, and a massive amount of luck are required to see them through.
Any advanced civilization will require as much time, energy, intelligence, and computing power as possible to solve the physics problems required to generate a new world. Nearly all of the energy in the universe currently goes unharnessed toward the effort of eternal consciousness. Nearly all of the energy of the sun goes unused by us. We are not even a Type I civilization that can utilize all of the energy available on our planet. The chance of us springboarding to a Type III civilization, one that could actually give consciousness a shot at forever, is slim to none, and that is if we decide to accelerate AI development in our time. If we don’t, then we guarantee that the chance is absolutely zero.
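To put “nearly all of the energy of the sun goes unused by us” into numbers, here is a rough comparison. The figures are standard order-of-magnitude estimates, not values from any source cited above:

```python
# Rough order-of-magnitude power scales, in watts (all figures are approximate).
HUMANITY_NOW   = 2e13    # current world power consumption, ~20 TW
EARTH_SUNLIGHT = 1.7e17  # sunlight intercepted by Earth (roughly the Type I scale)
SUN_TOTAL      = 3.8e26  # total solar output (roughly the Type II scale)
GALAXY_TOTAL   = 4e37    # Kardashev's original Type III scale, roughly a galaxy's output

print(f"Share of Earth's sunlight we currently use: {HUMANITY_NOW / EARTH_SUNLIGHT:.4%}")
print(f"Share of the Sun's total output we use:     {HUMANITY_NOW / SUN_TOTAL:.1e}")
print(f"The Sun's share of the Type III scale:      {SUN_TOTAL / GALAXY_TOTAL:.1e}")
```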
To those worried about the black balls being pulled out of the urn, I say we are wasting the time we have now to defend against the outcome of the black ball pull. We have probably already pulled it; it may just be a matter of time before we realize it. To those worried about the village idiots, I say we aren’t worried enough, otherwise we would be trying to get out in front of them. The only defense against a village idiot with global range is a global security system, something only AI can provide. To those worried about the alignment problem, I raise you the problem of misaligned humans. We are already misaligned as a species, and as the greatest hope for keeping consciousness alive throughout eternity, our misalignment is worse than misaligned AI. To those worried about Moloch, I say we are already experiencing Moloch as an existential risk to consciousness itself. We should be trying to defeat Moloch.
As Scott Alexander writes:
In the very near future, we are going to lift something to Heaven. It might be Moloch. But it might be something on our side. If it’s on our side, it can kill Moloch dead.
Even if that isn’t how he meant it, I believe it can be interpreted (in a way) to mean that AI can help prevent the very existential risks posed by AI itself.
Humans will go extinct; that much is guaranteed. However, we can still give consciousness the best shot it has at surviving the cosmos. We should accept the added risk of going extinct sooner in order to accelerate AI development and increase the chance that consciousness exists for eons longer than it otherwise would. Given that humans provide the best case for extending all known consciousness in the universe (at least until a better candidate becomes known), humanity’s demise will very likely also mean the end of all sentience as we know it.
Lucky for us, we are not extinct! This means we have a window of opportunity, however brief it may be, to increase the chances of consciousness surviving as much as possible. Without AI the chance of eternal consciousness is zero; with AI it becomes infinitely higher. The expected value of eternal consciousness is infinite compared to literally nothing at all.
We can bet on the perceived safety of the status quo and all but ensure the extinction of consciousness as we know it, or we can bet on the risk of AI and purchase a fighting chance for eternal consciousness.
In the words of Dylan Thomas:
Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
1. Bostrom, Nick. “The Vulnerable World Hypothesis,” Global Policy 10, no. 4 (2019): 455–476.
2. Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury Publishing, 2020, p. 169.
3. Ibid.
4. Ibid., p. 58.
5. Rees, Martin. Our Final Hour: A Scientist’s Warning: How Terror, Error, and Environmental Disaster Threaten Humankind’s Future In This Century—On Earth and Beyond. New York: Basic Books, 2003, p. 83.
6. Ibid., p. 61.
7. Bostrom, Nick. “Ethical Issues in Advanced Artificial Intelligence,” in Cognitive, Emotive and Ethical Aspects of Decision-Making in Humans and in AI, vol. 2 (2003): 12–17.
8. Amato, Ivan. “Speculating in Precious Computronium,” Science 253 (1991): 856–857.
9. Scoggins, M. T., and D. Kipping. “Lazarus Stars: Numerical Investigations of Stellar Evolution with Star-Lifting as a Life Extension Strategy,” Monthly Notices of the Royal Astronomical Society 523, no. 3 (2023): 3251–3257.
10. Lasota, Jean-Pierre, et al. “Extracting Black-Hole Rotational Energy: The Generalized Penrose Process,” Physical Review D 89, no. 2 (2014): 024041.
11. Kaku, Michio. Parallel Worlds: A Journey Through Creation, Higher Dimensions, and the Future of the Cosmos. New York: Doubleday, 2005, p. 328.
12. Ibid., p. 331.



"If you press it, consciousness has a 99.999% chance of going extinct within a billion years, with a 1% chance of eternal life."
I don't see how this can be mathematically possible. Surely if we have a 1% chance of eternal life, we can have at most 99% chance of going extinct within a billion years.