The Case for Metaversalism

Details of the rationalistic arguments in support of Metaverse Theory


One of the most important features of Metaversalism is that, unlike all previous religions and most previous philosophical systems, the fundamental precepts of its metaphysical framework can be placed on a sound rationalist foundation, based on some of the deepest and most sophisticated knowledge gained by modern empirical science and mathematics (including, importantly, theoretical computer science).  Our claim is that the principles of deductive and inductive reasoning that have proven so successful in science can be applied, together with various mathematical and scientific facts, to derive our entire metaphysical framework - with no arbitrary, ad-hoc elements or unwarranted assumptions required.

In this page, we will attempt to give the details of this case for Metaverse Theory, and explain why we believe that any intelligent individual who is sufficiently well-equipped with a sound, broad understanding of modern science and mathematics should be naturally led to this philosophy.

1. Some Important Scientific Facts to Know

In this section, we begin by reviewing some important facts to know from modern science, including mathematics and computer science as well as empirical science.

1.1.  Important Facts about Mathematics 

Here are some important facts to be aware of about pure mathematics, including the foundations of mathematics and logic.

1.1.1.  Math is perfectly-precise reasoning.   

Mathematics, as a field, is, most generally, about everything that can be deduced with certainty, starting from precisely-defined abstract concepts and unambiguous, precisely-stated axioms, and applying precisely-defined rules of deduction.  That is, mathematics is logical reasoning in domains that have been stripped of all linguistic ambiguity - all meaning associated with the concepts under discussion is laid out explicitly.  Mathematics thus has a mechanistic aspect - no fuzzy, indefinable vagaries are involved in its essence.  Therefore, in an important sense, math is equivalent to computation - a point to be discussed below.

1.1.2.  Math is independent of language.   

That being said, the concepts of mathematics are not only devoid of linguistic ambiguity, but are also independent of language altogether, in the sense that any sufficiently rich and powerful language could be used to express and explore the very same set of mathematical concepts and results.  Any given mathematical theory or structure can be equivalently described in an infinity of different ways.  (This follows from results in computability theory showing that all sufficiently powerful formal languages can be translated into one another.)

1.1.3.  Math is reducible to set theory and logic.  

It has been shown that all of the important subfields of modern mathematics can be reduced to one of a few fundamental theories, most notably Zermelo-Fraenkel set theory (ZF), or its extension ZFC (ZF with the Axiom of Choice), together with a standard deduction system for first-order or second-order predicate logic.  This provides a concrete example of a foundation for mathematics, although many other foundations are possible.

1.1.4.  Math contains infinite surprises.   

There are some important limitations to the reduction of mathematics to logic.  Gödel's Incompleteness Theorem tells us that any consistent mathematical theory that is powerful enough to describe arithmetic (and whose axioms can be effectively listed) must be incomplete - meaning that some statements expressible in the theory cannot be settled by any finite proof, and would require an infinite amount of knowledge to decide definitively.  Finite beings can discover such truths only by guessing, and so we will always have the possibility of discovering new mathematical surprises (as when a statement previously assumed true is found to be false).
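
The theorem just described can be stated compactly; the following is the standard modern formulation of the first incompleteness theorem:

```latex
\textbf{G\"odel's First Incompleteness Theorem.}
Let $T$ be a consistent, effectively axiomatizable theory that
interprets elementary arithmetic.  Then there exists a sentence
$G_T$ in the language of $T$ such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \lnot G_T .
\]
```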

1.2.  Important Facts about Computer Science

Theoretical computer science can be considered a branch of mathematics, but we present it in a separate section here because its key results are so central and important.

1.2.1.  Computation is perfectly-precise information processing

Theoretical computer science involves the study of perfectly-precisely defined abstract mechanisms that follow perfectly-precisely defined rules for computing perfectly-precisely defined new information from perfectly-precisely defined old information.  (This turns out to be entirely equivalent to what we are doing in mathematics - it is just described in a somewhat different way.)

1.2.2.  Computation is Universal

Like mathematics itself, the concept of computation is universal, in the sense that it does not matter what framework or language you use to define it.  Any sufficiently capable model of computation that could be physically implemented within our universe is capable, in principle, of performing exactly the same set of computations (when available memory is not a concern).  This observation (which is really an inferred law of physics) is known as the (physical) Church-Turing Thesis.
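
As a toy illustration of this model-independence (the two "models" below are merely contrasting programming styles of our own choosing, not formal models of computation), the same function can be computed recursively, in the spirit of the lambda calculus, or iteratively, in the spirit of a register machine:

```python
def factorial_recursive(n):
    # "Lambda-calculus-flavored" style: definition by self-reference,
    # with no mutable state.
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_machine(n):
    # "Register-machine-flavored" style: a loop updating mutable registers.
    acc = 1
    while n > 0:
        acc, n = acc * n, n - 1
    return acc

# Both styles compute exactly the same function, and each could (with
# more effort) simulate the other -- the essence of the universality claim.
print(factorial_recursive(10), factorial_machine(10))  # 3628800 3628800
```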

1.2.3.  Computation is equivalent to mathematics.

The concepts of computation and those of traditional mathematics are really just two different aspects of the same underlying realm, which we may generically call the realm of formal systems.  This is because a computational process can explore the results of any mathematical theory, and a mathematical theory can describe the workings of any model of computation.  If there is a difference between the mathematical and computational perspectives, it is only that mathematics emphasizes the results, while computation emphasizes the process that is used to obtain the results.  But they are, at worst, just two sides of the same coin.

1.2.4.  Computation contains infinite surprises.

Just as Gödel showed that finite proofs in mathematics cannot establish all true results that may depend on infinitely many cases, Turing showed that finite computations cannot predict in advance the results of other computations (whose run time may not be bounded).  In other words, there is a "no shortcuts" theorem - in general, the only way to obtain the result of a given computation is to actually work through all of its steps; and there is, in general, no way even to predict in advance whether the number of steps required will be finite or infinite.
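
Turing's argument can be sketched in a few lines of code (a sketch only - the `halts` oracle below is hypothetical, and the whole point is that it cannot actually be implemented):

```python
def halts(program, arg):
    # Hypothetical halting oracle: supposed to return True if
    # program(arg) eventually halts, and False otherwise.
    # Turing's theorem says no total, correct version of this can exist.
    raise NotImplementedError("no such oracle is possible")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:   # loop forever if predicted to halt
            pass
    return "halted"   # halt if predicted to loop

# If `halts` existed, diagonal(diagonal) would halt if and only if it
# does not halt -- a contradiction.  Hence no general shortcut for
# predicting the outcome of an arbitrary computation can exist.
```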

1.3.  Important Facts about Empirical Science

Empirical science differs from mathematics and computer science in that the latter rely, in principle, upon the exercise of pure reasoning alone, while empirical science relies for its results upon observation of the world, along with the formulation, testing, and evaluation of explanatory theories.  But here are some important general laws that have been abduced by empirical science.

1.3.1.  The Simplest Explanations are the Best

The above is perhaps the most important lesson learned over the centuries of empirical studies throughout the history of modern science.  Traditionally attributed to the medieval philosopher William of Ockham, this principle is known as "Occam's Razor," and is usually stated as: "Entities must not be multiplied beyond necessity."  An example of what not to do is provided by Ptolemy's epicycle model of planetary motion, which involved a complex system of "circles upon circles."  This rather baroque model was an accurate summary of the available data, but did not provide any real explanatory power; it would not have allowed one to easily predict the motion of a newly-discovered planet, for example, since each planet's motion was described by its own independent set of epicycles, specified using many arbitrary-looking parameters.  In contrast, the parameters describing planetary orbits under Newton's laws are fewer, and the complete orbital trajectory of a new planet can be predicted after just a small set of observations.

Now, it's important to realize that the modern understanding of Occam's rule is rather more sophisticated than the simplistic maxim "don't multiply entities" (circles, in Ptolemy's case).  The real issue is that, to have predictive power, a theory must be smaller and simpler (in terms of its information content - words, equations, lines of computer code, or whatever) than the data it purports to summarize - otherwise, it is just a restatement of the existing data.  Furthermore, we have found that the simpler a theory is to describe, the more likely it is to turn out to be correct (as long as it is not too simple to reproduce the available data).  This rule (sometimes also called the Minimum Description Length, or MDL, principle) has turned out to be successful, time and time again, across every field of science, and the fact that it works so well tells us something very deep and important about the nature of reality.  One of the goals of Metaverse Theory is to give a simple explanation of why Occam's Razor works, and not just use the fact that it works.
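
The MDL idea can be illustrated crudely with a general-purpose compressor standing in for "description length" (the choice of zlib and the particular data sets here are our own assumptions, purely for illustration): data generated by a simple rule compresses to a short description, while patternless data cannot be summarized by anything much shorter than itself.

```python
import random
import zlib

# Data following a simple "law": a repeating period-7 pattern.
structured = bytes(i % 7 for i in range(10_000))

# Data with no law at all: uniformly random bytes.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10_000))

# Compressed size is a rough proxy for minimum description length.
print(len(zlib.compress(structured)))  # tiny: a short "theory" suffices
print(len(zlib.compress(noisy)))       # ~10,000: the "theory" is just the data
```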

1.3.2.  Our Universe has a Mathematical Model

This observation is abduced from the successes of physics over recent centuries, and especially in the 20th century.  As physicists have dug more deeply into the mechanisms of nature, they have found, time and time again, that each new physical phenomenon that was discovered turned out to be amenable to a quite accurate description in terms of concepts from the world of pure mathematics.  The enterprise of reducing physics to mathematics is not yet complete, but the great successes that have been achieved so far (explaining the results of essentially all physics experiments that we can do) give us great confidence that ultimately, the entire universe will turn out to be very accurately or even perfectly modeled by some mathematical structure or other.  Furthermore, due to the equivalence of mathematics and computation, this implies that the unfolding of the details of our universe is also equivalent to a computation.  (And really, to infinitely many different computations that could produce the same results.)

1.3.3.  Conscious Minds are Computational Systems

This one is perhaps the most controversial and/or surprising to the average person on the street.  But, all available evidence from physics, neuroscience and psychology continues to indicate that, however complexly our subjectively experienced stream of consciousness may be implemented in detail, it is ultimately nothing but the playing-out of some information-processing mechanism or other within the brain.  There is no scientific evidence whatsoever that the experience of consciousness requires the involvement of some mysterious "soul" that somehow transcends the bounds of what could be accomplished by some physical information-processing mechanism.  

Indeed, the very idea that there is a source of will (decision-making power) that cannot be described mathematically, i.e., as a computational process, appears to have been a logically incoherent one from the beginning - for if the will is not determined in any mathematically describable way, then by what magic can it possibly be determined?  What else is there?  Computation is the ideal language of process; mathematics is the ideal language of conceptual description; if it is not a process, and has no conceptual description, can it really be anything at all?  Even allowing for the role of randomness in physics does not help us escape this conclusion, since even a purely random process has a mathematical description as such, and anyway, pure randomness would hardly be considered a satisfying source of will, due to its pure capriciousness.  

In case one is skeptical about the comprehensiveness of the existing psychological research, it is important to note that this argument is really one from pure reason, and does not even rely on the empirical data that supports it.  The important point to note here is that, even if it turns out modern physics is incomplete, and even if it fails to account for various as-yet-undiscovered fields and forces that might allow spiritual energies to flit about invisibly between bodies, presumably there would still exist some more complete, undiscovered system of physics that could encompass the behavior of these spirits as well, within some more complete mathematical/computational framework.  Because if the world of the spirits does not work mathematically - how else could it possibly work?  Again, what else is there, besides mathematics, that could possibly form a concrete, definite foundation for any system?  We cannot put our finger on any other possible foundation for the reality of mind, other than the foundation of pure logic and reason that we already know is there.  Any conceivable basis other than a mathematical/computational one would seemingly be incapable of ever yielding anything other than, at best, some vague sense of fuzzy, undefined unreality.  Certainly, science knows of no concrete example of a phenomenon that lacks an underlying basis in mathematical law.

1.3.4.  Probabilities are Central to Reality

This is primarily a lesson of modern quantum physics, although hints of it could be found already in the statistical mechanics of the late 19th century.  Quantum physics tells us that multiple, divergent versions of reality exist, each with its own corresponding probability; these probabilities summarize the apparently random pattern of results obtained from certain kinds of experiments.  These diverging possible realities are real, as surely as we can say this about anything else we can see and measure in the world, because the mere existence of the alternate possibilities has definite, measurable effects on the results of experiments.  In other words, quantum possibilities are also actualities, as far as we know, although we and our experiments subjectively find ourselves trapped within individual possibilities, and can only indirectly detect the existence of the others.  But the other possibilities (both for our experiments, and for the state of our minds) are really there too, as far as we can tell.

The lesson of all this is that, in order for us to make sense of our memories of our conscious experiences (or our records of experiments), we cannot expect to be able to predict everything with certainty, but instead we have to have a theory of reality that predicts probabilities for experiencing different outcomes.  (Note that, just as in the worlds of mathematics and computation, in physics as well, the inevitability of eternally being surprised turns out to be fundamental.)  

Armed with a probabilistic theory of reality, we can compare it with past results, and see if the theory correctly "retrodicts" the statistical pattern of results that was observed.  If it does, then we gain confidence that the theory will correctly predict our likelihood of subjectively experiencing certain outcomes in the future.  This whole paradigm works even if there are multiple possible branching futures for us, all real in their own right, as long as each possible future can be assigned a numerical weight that can, in retrospect, be compared with our memory of which outcome actually happened, later on within our subjective stream of thought.
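
The retrodiction paradigm just described is ordinary Bayesian updating; here is a minimal sketch with made-up numbers (the two "theories" and the outcome probabilities they assign are purely hypothetical):

```python
def bayes_update(priors, likelihoods):
    """Return posteriors, given priors and the probability each theory
    assigned to the outcome that was actually observed."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Theory A says outcome X occurs with probability 0.9; Theory B says 0.1.
priors = [0.5, 0.5]
for _ in range(5):  # we observe outcome X five times in a row
    priors = bayes_update(priors, [0.9, 0.1])

# Confidence concentrates on the theory that retrodicted the data.
print(priors)
```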

Later, we will see that this centrality of probability discovered by physics gives us clues about how to handle a more general metaphysics of consciousness, even outside of the specific framework of quantum mechanics.  It is this generalized probabilistic metaphysics that will later enable us to speak cogently about life-after-death scenarios that lie outside the province of traditional science.

2.  Metaversal Ontology

With our review of contemporary science complete, let us turn towards the development of our metaphysical system.  The first question for our metaphysics is the question of ontology:  What entities exist?  What kinds of existence are there?  What does existence mean?

2.1.  Occam's Razor and Predictive Metaphysics 

Informed by the successes of Occam's Razor in science, we will adopt it as a guiding principle in the metaphysical domain as well.  In other words, we will accept, as a fundamental assumption, that the simplest ontological theory to describe is the one that is most likely to be correct, in the sense of having the greatest expected degree of predictive power.  

Please note that predictive power remains an important desideratum even for a metaphysical theory, since an individual conscious being may be interested in knowing, for example, whether he can expect to experience an afterlife - and the answer to this question may depend on his metaphysical environment.  Certainly, traditional physics does not provide any clear means for attaining an afterlife within the bounds of known physics, so if there is any road to an afterlife, it would, most likely, have to be a metaphysical one.

Moreover, the correctness of a metaphysical theory can even be empirically validated by an individual conscious being, or even on a consensual basis by an entire civilization.  Here is how.  Suppose that a given metaphysics predicts an afterlife, not just for an individual conscious mind, but also for any larger sized information-processing system, including even an entire civilization.  Now, suppose that a given being or civilization faces imminent physical destruction (e.g. by an approaching asteroid), but then finds itself to have miraculously survived beyond the expected moment of its death.  The nature of its new environment, and the condition under which it finds itself surviving may have been predicted by various metaphysical theories - these predictions can then be compared with the new (post-death) observations.  The being or civilization then gains confidence in the theories whose predictions were consistent with the observations.  Later on in the afterlife, the being/civilization may face destruction yet again, under the terms of the new "laws of physics" (whatever they are) that obtain within its new environment.  So, the process repeats.  Eventually, after experiencing many deaths in many different universes, the being or civilization may have substantially narrowed the field of its viable metaphysical theories to a relatively narrow range of theories that successfully predicted the statistical distribution of that entity's past circumstances of resurrection.  Thus, a metaphysical theory can, in principle, be empirically tested and validated, in much the same way as an ordinary physical theory can - it just requires our dying multiple deaths to do it.

So, since a metaphysics is empirically testable (at least if it predicts an afterlife), empirically successful principles such as Occam's Razor should be expected to be useful in guiding our search for a correct metaphysics.

2.2.  The Simplest Possible Ontology

With Occam's Razor in mind, let us attempt to construct the descriptively-simplest ontological system that succeeds in explaining the raw data of our experience.

2.2.1.  The Number of Fundamental Types of Existence 

First, we must decide how many fundamental types of existence there are.  For anything to exist, there must be at least one fundamental type of existence.  If this is sufficient for explaining our experience, then by Occam's Razor there must be only this one type of existence at a fundamental level, since a theory involving multiple types of existence would be more complex to describe than a theory involving only a single type of existence.  So, we assume that our working ontological theory will require only one fundamental type of existence.  

We repeatedly say "fundamental" in the above paragraph because it would be perfectly OK to define other, derived, existence concepts, purely as a matter of linguistic economy, that rest on the fundamental one, but are not separate from it - we will do this ourselves later on.

2.2.2.  Mathematical Existence 

Now, since we have only one fundamental type of existence to work with, we see that the entities that we already know about from pure reason must have this type of existence.  These entities include mathematical theories, mathematical structures, and computations.  Such entities can be considered to have earned the "right" to existence of some sort, because they have stable, well-defined properties - properties that are independent of whatever language we might use to describe them.  Any sentient being in any possible universe could investigate a given mathematical theory or object or computation, using a language of his choosing, and the results would be the same.  Mathematical objects and theories thus have ontological status.  Indeed, it is common practice among mathematicians to say that such-and-such mathematical object "exists" as long as it has self-consistent, well-defined properties.  We will call this type of existence "M-existence," as an abbreviation for "mathematical existence," and we adopt it as the single type of existence in our ontology.  All mathematical objects, structures, and computations that could possibly be self-consistently described are thus considered to exist (meaning M-exist), by definition.  We name this vast universe of possible mathematical entities the Metaverse.

2.2.3.  Infinite Sets of Extants are Just Fine by Occam 

Note that our having suddenly admitted an infinite collection of objects to existence does not itself constitute a violation of Occam's Razor.  Why not?  Because the description of this theory was short.  To declare that infinitely many objects have existence does not require an infinitely long description; we simply say "all" mathematical objects exist; the objects themselves can be implicitly generated, e.g., by invoking an automatic enumeration of all finite descriptions.  In fact, this is the simplest possible theory that admits mathematical objects at all, since to provide existence for some mathematical objects, but not others, would require additional descriptive complexity in the theory to pin down which objects exist and which do not.  Again, it is the descriptive complexity of the theory itself, and not the number of objects invoked by the theory, that is the true quantity to be minimized by the Razor.  The successes of empirical science have demonstrated this principle over and over.  As a classic example, Newton's physics invoked the existence of an uncountable collection of points of space and time, indexed by real-valued coordinates.  But this requirement in itself is very simple to state, so the mere hugeness of the number of points does not count as a strike against Newton's theory, which indeed turned out to be enormously empirically successful.
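
The phrase "automatic enumeration of all finite descriptions" can be made concrete in a few lines (the binary alphabet here is an arbitrary choice; any finite alphabet works): a single short program generates every finite string, shortest first, so every possible description is reached at some finite position.

```python
from itertools import count, product

def all_descriptions(alphabet="01"):
    # Yield every finite string over the alphabet, in order of length.
    for length in count(1):
        for symbols in product(alphabet, repeat=length):
            yield "".join(symbols)

gen = all_descriptions()
print([next(gen) for _ in range(6)])  # ['0', '1', '00', '01', '10', '11']
```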

2.2.4.  Mathematical Existence is Sufficient

Einstein is often credited with the remark that every scientific theory should be made as simple as possible, but no simpler.  Even the simplest theory is useless if it does not successfully explain the data.  Let us now consider whether the ontological theory above (all mathematical objects exist, but nothing else exists) is sufficient to explain the data of our experience.

We saw from our review of empirical science that (1.3.2) physics strongly suggests that our universe has a mathematical model - that is, one that is either exact, or else is so accurate that it can explain all our physical observations.  Thus, our ontology already gives us that a mathematical structure (namely, the model of our universe) exists that has details within it that are indistinguishable from the observable details of our universe.  

Moreover, we saw (1.3.3) that our conscious minds themselves are nothing but computational processes embedded within the universe.  Therefore, all details of our conscious thought processes are also contained within the aforementioned mathematical model of our universe.  Therefore, nothing that you can consciously experience can possibly constitute evidence that anything exists beyond the model itself - if it did, that would mean that the model was not really a complete model of your consciousness, and therefore of the universe, and either 1.3.2 or 1.3.3 (or both) would have to be wrong.  

Furthermore, and even more strongly, for your conscious experience to not be perfectly captured within some mathematical model again gets back to the issue mentioned earlier - if it is really not mathematically describable, in any way, then can it really be anything at all?  To say it is not mathematically describable is seemingly to say that it has no coherent description, that it makes no sense as an object, that it has no self-consistent set of properties, that there can be no firm, hard reality to it.

The upshot of this argument is that it is perfectly sufficient to assert that mathematical existence is the only kind of existence.  Moreover, no physical experiment or act of self-reflection could ever be argued to constitute evidence that there is any other kind of existence beyond mathematical existence.  Therefore, to believe in any other fundamental kind of existence would be nothing but an act of pure faith, and one which would "multiply entities" (namely, separately-described concepts of existence) in a way that is totally unnecessary to explain any actual data.  It would therefore be a classic and egregious violation of Occam's Razor, the most fundamental principle of scientific thought.

Thus, we conclude that mathematical existence is the only kind of existence. 

2.2.5.  The Existence of the Universe is Explained

From the above, we immediately have the following quite adequate explanation for the existence of our universe, and of our conscious experiences within it:

  1. Our universe is nothing but a mathematical structure (or, perhaps more precisely, it could be said to correspond to the set of all possible mathematical structures that are consistent with all of our observed data).
  2. Our universe exists (M-exists) simply because all mathematical structures must exist (M-exist), by the arguments of 2.2.2 and 2.2.3.
  3. The M-existence of our universe is sufficient to explain our experience, by the arguments of 2.2.4, backed up by the scientific evidence cited in 1.3.2-1.3.3.

Thus, any need to invoke an Intelligent Designer or other deity for the creation of our universe is immediately removed - since the existence of the universe already follows from a simple application of pure reason and the empirically successful Occam's Razor.

Furthermore, the fact, observed by science, that the physical laws and parameters of our universe seem finely tuned to permit the evolution of intelligent life as we know it is immediately explained by an application of the Anthropic Principle - not only does our universe exist, but all possible (mathematically describable) universes exist; it is just that the universes that do not evolve intelligent life-forms have no observers within them to notice that their laws aren't so finely tuned.  So, of course the universes that have observers within them are just those in which the parameters are finely tuned to support the evolution of conscious observers.  It really could not have turned out any other way, and, in retrospect, it should not even be surprising to us.

2.3.  Explaining the Empirical Success of Occam's Razor using Probabilistic Metaverse Theory

The above explains why a universe with conscious beings in it exists - but some aspects of our universe are left unexplained.  For example, why is our universe one in which Occam's Razor appears to work?  For, if every mathematically describable universe exists, then a universe that has much more complex laws than ours would also exist, and in such a universe, we might not find Occam's Razor to work quite so well in practice.  The logical extreme of this is a universe in which the laws are so complex that everything in our environment looks totally chaotic and random, and a functioning consciousness like ours could probably not even persist for very long in such an environment.  Yet, if such universes exist, then a consciousness like ours could find itself there equally well - at least for a brief moment.  Why then, after each moment of consciousness, do we not find our environment rapidly disintegrating into chaos around us?  Or, perhaps more to the point, why do we not have memories of a chaotic, unpredictable past leading up to a brief moment of coherent awareness, just before the present time?  Instead, we see and have memories of a stable environment with simple laws persisting around us.

The answer must have something to do with the role of probabilities, as being central to reality, or at least to a conscious being's subjective perception of reality - the empirical point that we noted in sec. 1.3.4 above.  Universes with complex, chaotic laws must be less probable, at least from the subjective perspective of conscious beings within them, than universes having stable, simple laws.

To explain why this is so, we therefore demand of our metaphysics that it must provide some means of assigning probabilities to universes.

The problem of counting possible universes is actually somewhat of a classic problem in physics, as it arises in statistical mechanics in the context of counting the number of distinct configurations of particular physical systems - which is necessary for predicting measurable physical properties in the thermodynamic domain, such as entropy, heat capacity, and so forth.

However, in an ontology that allows all mathematically describable universes to exist, the problem becomes rather more complicated.  We now have to assign probability measures to possible universes, even ones outside the framework of quantum theory, which may be described in an infinite variety of ways - including as substructures contained within an infinite variety of other, larger structures.

At this point, our metaphysics becomes a little more speculative.  We assume, as our best guess and working theory, that the probability of a given universe has something to do with how frequently that universe occurs as a substructure within other possible mathematical structures within the Metaverse.  (Although it is not yet clear precisely how this "frequency of occurrence" quantity should be defined, within this infinite domain.)  

The reason for this choice is that there are strong reasons from information theory to expect that structures having simpler descriptions will arise more often, by accident, as part of larger structures.  (For instance, a short fragment of computer code will appear more frequently, within a long random string of bits, than a longer fragment will.)  Thus, the empirical success of Occam's razor within our universe is (potentially) explained:  Universes with simple descriptive laws arise more often by accident within various mathematical structures (including embeddings within other universes), and therefore such universes appear more frequently throughout the Metaverse, and therefore they have higher probability, and so a randomly-selected conscious observer somewhere in the Metaverse is more likely to be part of one of these universes than to be a disembodied brain that emerges momentarily in the midst of a sea of random chaos.  That is why we, as typical conscious observers, find ourselves in a world in which Occam's Razor works reliably, and both our memories of the past and our subsequent experiences of the future bear that out.
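
The information-theoretic claim above is easy to check numerically (the string lengths and bit patterns below are arbitrary choices of ours): in n random bits, a pattern of length k is expected to occur roughly (n - k + 1) / 2^k times, so shorter patterns occur exponentially more often.

```python
import random

random.seed(1)
bits = "".join(random.choice("01") for _ in range(100_000))

def count_occurrences(pattern, text):
    # Count (possibly overlapping) occurrences of pattern in text.
    return sum(1 for i in range(len(text) - len(pattern) + 1)
               if text.startswith(pattern, i))

short_pattern = "0110"          # k = 4:  expect ~ 100000 / 2**4  ~ 6250 hits
long_pattern = "011010010110"   # k = 12: expect ~ 100000 / 2**12 ~ 24 hits

print(count_occurrences(short_pattern, bits))
print(count_occurrences(long_pattern, bits))
```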

We are thus led to a metaphysics in which all possible universes (and perhaps more generally, all possible mathematical structures) exist, but some exist "more strongly" (i.e. with a greater probability, or "degree of reality") than others.  Specifically, structures with simpler descriptions occur far more often by accident, scattered around the Metaverse, as part of larger mathematical structures or computations.  This Metaversal probability distribution over universes has a direct bearing on the likely past memories and future experiences of a "typical" conscious being within the Metaverse, conditioned on whatever additional characteristics (being "like us", for example) we may wish to impose.

We call this general ontological framework Metaverse Theory.   It is our hope that further mathematical investigation of the theory will determine more specific rules for assigning probabilities to universes, and hopefully discover that only one system for doing so is logically self-consistent.  If that is the case, then it will give us a potential means for determining the probabilities of other possible universes that lie outside the narrow scope of our own universe's laws of physics.  

In the next sections, we will explore the applications of this type of mathematically-based, probabilistic metaphysical system in addressing spiritual and theological questions.

3. Spiritual & Theological Implications

[to be written - talk about streams of consciousness continuing eternally in alternate universes - talk about gods observing our universe and choosing which beings to resurrect in an afterlife]