The Hypercomplexity Argument: It Is Likely That The Fundamental Laws Of Physics Are Millions Or Billions Of Times More Complex Than Our Current Laws

Richard Price

January 30, 2024

1 Introduction

It is likely that the fundamental laws of physics are millions or billions of times more complex than the laws that humans have formulated so far. An informal statement of the argument for this conclusion is as follows:

1. Consider a spectrum of intelligence, where ants are at level 0; pigs, monkeys and orcas are at level 3; and humans are at level 4. A superintelligence is at level 10. There are also levels 500, 1,000, 1 million, and 1 billion. Possibly there are infinitely many levels of intelligence.

2. Let us ask: "what is the minimum level of intelligence required to understand the fundamental laws of physics?"

3. A monkey, at level 3 in the intelligence spectrum, cannot understand quantum mechanics.

4. Therefore, the minimum level of intelligence required to understand the laws of physics is between 4 and whatever level is the end of the spectrum (if the spectrum is finite).

5. In order to make the probability calculations simple, we will assume that the number of levels in the intelligence spectrum is one billion.

6. We will assume that every level between level 4 and one billion is equally likely to be the minimum level of intelligence required to understand the laws of physics.

7. By these assumptions, the probability that level 4 (humans) is the minimum level of intelligence required to understand the laws of physics is low: it is 1 / (1,000,000,000 - 3) = 1/999,999,997. If there were only ten levels of intelligence, the chance would be 1 / (10 - 3) = 1/7.

8. The minimum level of intelligence required to understand the laws of physics may be much higher than level 4. It could be level 500,000,000, say.

9.
The Isaac Newton of orcas, swimming around the ocean, might formulate a law: "no animal can survive outside of water." This scientifically-minded orca might observe land in the distance, and think "that is the edge of my universe; I don't know what happens there. Maybe nothing happens there."

10. Humans are in the same category as orcas: the laws humans formulate make sense from the perspective of the size of human brains, and the environment humans operate in. But, for any law, a being at level 500,000,000 on the intelligence spectrum would say "that law hypothesized by a human is as far from the truth as laws hypothesized by orcas."

I call this argument "the Hypercomplexity Argument": it is likely that the universe is hypercomplex, i.e. thousands, millions or billions of times more complex than what humans suppose. Only minds millions of times more powerful than human minds can grasp the laws of physics. I mean the phrase "laws of physics" broadly, such that the laws describe the fundamental nature of physical reality.

Structurally, the Hypercomplexity Argument bears some similarity to Nick Bostrom's Simulation Argument:

• In the past, the epistemic status of sceptical scenarios was one of uncertainty: "how do we know that an evil demon is not deceiving us?"; "how do we know that we are not a brain-in-a-vat?"

• Nick Bostrom's Simulation Argument provides probabilistic evidence that a sceptical scenario exists.

• Similarly, in the past, many have wondered "how close are the laws of physics that we have formulated so far to the fundamental laws?"

• The Hypercomplexity Argument provides probabilistic evidence that the fundamental laws are far off from the laws we have formulated so far.

I noticed the similarity after formulating the Hypercomplexity Argument, and reflecting on its structure. The Hypercomplexity Argument was inspired by a conversation I had with the physicist Martin Rees.
I was chatting with Martin Rees, and asked him "What do you think happened before the Big Bang?" Martin Rees answered "I don't know, and even if I was told the answer, I am not sure I would be able to understand it. After all, a monkey cannot understand quantum mechanics."

In the rest of this paper, I will seek to motivate the premises of the Hypercomplexity Argument, and I will discuss some responses to it.

2 Formal Statement of The Hypercomplexity Argument

Below is a formal statement of the Hypercomplexity Argument.

1. Humans are at the low end of the spectrum of possible intelligence, which has millions or billions of levels, and may be infinite.

2. Every level in the spectrum of intelligence, above level 3, is equally probable to be the minimum level of intelligence required to understand the fundamental laws of physics.

3. If premises 1 and 2 are true, then it is likely that the beings that possess the minimum level of intelligence required to understand the fundamental laws of physics are millions of times higher on the intelligence spectrum than humans.

4. It is likely that the beings that possess the minimum level of intelligence required to understand the fundamental laws of physics are millions of times higher on the intelligence spectrum than humans.

Below we will consider premises 1, 2 and 3.

3 Premise 1: Humans are at the low end of the spectrum of possible intelligence, which has millions or billions of levels, and may be infinite

I will start with a few simple examples.

3.1 Mental Arithmetic

Let us consider mental arithmetic. A monkey can do simple mental arithmetic. A human child can do more mental arithmetic than a monkey. There are humans who are very good at mental arithmetic. There is a maximum level of mental arithmetic that the smartest human can do. For illustrative purposes, let us say that on the spectrum of mental arithmetic, ants are at level 0, monkeys are at level 3, and humans are at level 4.
(We can suppose that humans are in a range, with the median human at level 4). Computers are already further along the mental arithmetic spectrum than humans. Let us suppose that the fastest computers today are at level 7 for mental arithmetic. The spectrum of mental arithmetic extends further than level 7. It may be infinite.

One example of a mental arithmetic problem would be calculating the digits of Pi. Performance on this mental arithmetic task could be measured by the number of digits of Pi a computer can calculate in a second. Performance on a factorization task could be measured by the time it takes to factorize a number, or the size of number that can be factorized in a second.

In the case of calculating the digits of Pi, or factorizing numbers, the spectrum of capabilities is long, and humans are at the low end of it.

3.2 Emotional Intelligence

Consider an emotional intelligence problem: deciding who should sit next to whom at a dinner, in order to maximize fun and fruitful conversation. Someone who is emotionally intelligent will have a sense of what kind of person will get on with what other kind of person, possibly based on limited interactions with the people involved. Such a person may be able to arrange a fruitful dinner party, with the right seating plan, for, say, 20 people.

The dinner party problem gets harder as the input variables get more complex:

• The number of people at the dinner

• The complexity of the mental states of the people attending the dinner

• The level of scrutability of people's mental states (i.e. how easy it is to tell what someone thinks and wants)

We can imagine a spectrum of emotional intelligence. Ants are at level 0; monkeys are at level 3; humans are at level 4. The spectrum of emotional intelligence is long. And plausibly humans are at the low end of it.

3.3 Creativity

Consider creativity. We will look at one kind of creativity problem.
Suppose there is a set of materials, and the task is to think of as many interesting structures as one can create with these materials. With some wood, nails and a hammer, a monkey might be able to create a limited range of structures, just resting the pieces of wood against one another, without using the hammer. A human could create more interesting structures, using the hammer and nails. The creativity problem scales with the number of materials, as well as the types of materials.

As before, we can imagine that in the creativity spectrum, ants are at level 0, monkeys are at level 3, and humans are at level 4. The spectrum of creativity is plausibly long, with humans at the low end of it.

3.4 Linguistic Understanding

Consider linguistic understanding. One problem in linguistic understanding is being able to understand multiple negatives joined together in a sentence. E.g. consider a multiple negative sentence like "we mustn't prevent the avoidance of an absence of mission failure." It takes some time to strip out the multiple negatives in a sentence like that to reveal the underlying meaning. The difficulty scales as the negatives in the sentence multiply.

In the case of linguistic understanding, we can consider a spectrum of ability within humans. A 5-year-old can, perhaps, understand a sentence with two negatives; a typical adult can understand a sentence with 5 negatives. A top 1% adult in linguistic understanding can perhaps understand a sentence with 15 negatives. The spectrum of linguistic understanding of sentences containing negatives is long. Plausibly, humans are at the low end of the spectrum.

3.5 Causal Complexity

Consider causal complexity. A monkey can understand simple causal systems, with one or two interacting elements. E.g. a monkey knows that if it knocks a log off a branch, the log will fall down. If the log falls on another monkey's head, that monkey will complain.
Complex causal systems, like an internal combustion engine, have many causally interacting parts, including feedback loops, where outputs in one part of the system serve as inputs to an earlier part of the system.

The macroeconomy is a massively complex system. What price someone will pay for something depends on their preferences; and those preferences themselves depend on other people's preferences. This means that forecasting effects on the system from changes is hard. Consider forecasting the impact on every human on the planet of increasing the price of tea in Singapore by 1 percent.

The complexity of causal systems scales with the number of causally interacting elements, the complexity of the feedback loops, and other dependencies. The spectrum of causal complexity is long, and plausibly, humans are at the low end of it.

For the mental abilities that we have defined so far (mental arithmetic, emotional intelligence, creativity, linguistic understanding, causal complexity), there is a way of defining a spectrum of increasingly more challenging problem sets for that ability. For each spectrum of increasingly-difficult problem sets, there is a corresponding spectrum of problem-solving abilities: greater problem-solving abilities correspond to harder problem sets. And in each case, it is plausible that humans max out at the low end of the problem set spectrum, and, correspondingly, of the problem-solving spectrum.

With the intelligence abilities we have defined so far, one reaction may be "that is just processing power. It is not the same as intelligence." Or "that is just memory, working memory and long-term memory. It is not the same as intelligence." "Intelligence" is a somewhat general word.
In the Hypercomplexity Argument, we could replace the concept of an intelligence spectrum with more concrete concepts such as:

• memory (working memory and long-term memory)

• processing power

• the mental skills listed above, such as mental arithmetic, emotional intelligence, and so on.

For each concept, there is a long (possibly infinite) spectrum that corresponds to it, and plausibly, humans are at the low end of it. In the rest of this paper, we will continue to formulate the Hypercomplexity Argument using the general notion of an intelligence spectrum.

In this section, we have listed spectra where there could be beings further along than humans. A monkey, if it reflected on the ways in which humans were more intelligent than it, would use coarse-grained concepts. A monkey might think "Humans can build bigger things than me." A monkey would not think "Humans are better at mental arithmetic, linguistic understanding, causal complexity, emotional intelligence, and so on."

A level 10 intelligence, reflecting on the difference between it and a human, would look at the spectra listed above, and think "yes, those coarse-grained spectra are accurate, as far as they go; we are better at 'causal complexity' than humans, in the same way that humans are better at 'building bigger things' than monkeys (to use a thought that a monkey might have). But we would choose a set of more fine-grained, advanced concepts to describe the difference between us and humans."

4 Premise 2: Every level in the spectrum of intelligence, above level 3, is equally probable to be the minimum level of intelligence required to understand the fundamental laws of physics

Monkeys are at level 3 on the intelligence spectrum. Monkeys do not understand quantum mechanics. We may conclude that beings at level 3 and below do not understand the fundamental laws of the universe.
Given our evidence, when it comes to levels 4 and above, there is no reason to think that one level is more likely than another level to be the one that is the minimum level required to understand the laws of the universe.

To make intuitions about this principle vivid, let us imagine a Cosmic Intelligence Conference. At the Cosmic Intelligence Conference, one being from every intelligence level, from level 0 to level 1 billion, attends. To keep our math simple, we are assuming that the intelligence spectrum is bounded at level 1 billion. We will consider infinite spectra later.

• An ant from level 0 attends.

• A monkey from level 3 attends.

• A human from level 4 attends.

• A superintelligence from level 10 attends.

• All the way up to level 1 billion.

The human, at level 4, attends the conference. She walks around in awe at the intellectual power of the beings around her. The intellectual power of the other beings is overwhelming. The being at level 500 can calculate in a second how the entire universe would have played out differently if one particle had been in a microscopically different place at the time of the Big Bang. The being at level 1,000 can create a million Big Bangs in a second, and predict the trajectories of each resulting universe for billions of years in an instant. Most of the things the level 1,000 being talks about are incomprehensible to the human, just as most of the things that humans talk about are incomprehensible to ants.

There is an MC at the conference. The MC says "Welcome, everyone. We are going to do a fun exercise today. Perhaps, in moments of reflection, you have wondered about the minimum level of intelligence required to understand the fundamental laws of the universe. Well, we are all going to take our bets on the answer to this question, and then I am going to give you the answer!
Please write your bet down on a piece of paper, along with your name, and put your piece of paper in the urn in front of me."

As you think about what level to put on your paper, what considerations should you factor in? How should you think about it? If it was me, I would assume that all levels from 4 and above are equally likely. I would not have any reason to prefer one level over another. I would choose at random in the range of 4 to 1,000,000,000.

Suppose the MC calls out: "It is level 750,000,000!" Am I surprised by this answer? Not especially. It could have been any level like that.

Suppose the MC calls out: "It is level 4!" Everyone turns to me and says "you are the lucky level! Every level below you is not capable of enlightenment with regard to the fundamental laws. But you are. Your level is the special level!"

I would indeed feel lucky, in the same way that I would feel lucky if I won a lottery with almost one billion tickets. I would go home and tell my family "the human species is special. It is the lowest level in the spectrum of intelligence that can understand the fundamental laws of nature."

4.1 Laws that are consistently accurate

There are cases where I would reason differently, when writing down my bet at the Cosmic Intelligence Conference. Suppose that there is an alien civilization at level 10 on the intelligence scale. This alien civilization has formulated a set of laws L, and it finds that the predictions from L are always accurate. Even as the alien civilization has advanced its technology over thousands of years, and thereby improved the precision of its measuring devices, the predictions from L continue to be confirmed.

Furthermore, the alien civilization has surveyed the galaxy, and has studied civilizations from Level 1 to Level 20. The alien civilization notices that many civilizations from level 4 to level 20 have formulated L. No civilizations of level 3 and below have been observed to formulate L.
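The alien's survey evidence can be cast as a toy Bayesian update. The sketch below is my own illustration, not part of the original argument: it assumes a spectrum of only 1,000 levels (to keep the loop fast) and a small probability `eps` that a civilization below the true minimum level stumbles onto something that looks like L anyway.

```python
def posterior_min_level(observations, levels=1000, eps=1e-3):
    """Uniform prior over which level m (4..levels) is the minimum needed to
    formulate the correct laws L. Each observation is (civ_level, formulated_L).
    A civilization at or above m formulates L with certainty; one below m
    stumbles onto something that looks like L with small probability eps."""
    ms = range(4, levels + 1)
    post = {m: 1.0 for m in ms}  # uniform (unnormalized) prior
    for civ_level, formulated in observations:
        for m in ms:
            p = 1.0 if civ_level >= m else eps
            post[m] *= p if formulated else (1.0 - p)
    total = sum(post.values())
    return {m: v / total for m, v in post.items()}

# The survey described above: civilizations at levels 4..20 all formulated L.
obs = [(lvl, True) for lvl in range(4, 21)]
post = posterior_min_level(obs)
```

Observing level-4 civilizations formulating L concentrates the posterior on m = 4, mirroring the alien's growing confidence; each further observation of a sub-level-4 civilization failing to formulate L would sharpen it further.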
An individual from this level 10 alien civilization is invited to attend the Cosmic Intelligence Conference. Like many at the conference, the individual from level 10 is over-awed by the intellectual power on display.

When the MC says "write down your bet for the minimum level of intelligence required to understand the fundamental laws of physics", she reasons as follows: "I think level 4 may be the correct answer. First of all, the set of laws in L look correct to me. Many civilizations I have studied have independently discovered L. Level 4 is the lowest level on the intelligence spectrum to have discovered L. I am not certain, of course, and I could be wildly off, but my guess is that level 4 is the minimum level of intelligence required to understand the laws of physics."

The alien's confidence would increase as it observes more civilizations that have independently formulated L, and finds evidence that civilizations below level 4 have not formulated L.

5 Recap

Recall the formal argument from above.

1. Humans are at the low end of the spectrum of possible intelligence, which has millions or billions of levels, and may be infinite.

2. Every level in the spectrum of intelligence, above level 3, is equally probable to be the minimum level of intelligence required to understand the fundamental laws of physics.

3. If premises 1 and 2 are true, then humans are not likely to be close to the minimum level of intelligence required to understand the fundamental laws of physics.

4. Humans are not likely to be close to the minimum level of intelligence required to understand the fundamental laws of physics.

We have reviewed the reasons for premises 1 and 2. Premise 3 is established by calculating the probability that level 4 (or a nearby level) is the minimum level of intelligence required to understand the fundamental laws of physics.
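The calculation behind premise 3 can be made concrete in a few lines. The function name and the exact-fraction representation are my own choices; the billion-level bound and the uniform assumption come straight from premises 1 and 2.

```python
from fractions import Fraction

def p_min_level_at_most(k, levels=10**9, floor=4):
    """Uniform prior over levels floor..levels for the minimum level needed to
    understand the fundamental laws: probability that the minimum is <= k."""
    n_candidates = levels - (floor - 1)              # 999,999,997 candidate levels
    eligible = max(0, min(k, levels) - (floor - 1))  # levels floor..k
    return Fraction(eligible, n_candidates)

p4 = p_min_level_at_most(4)        # chance that level 4 (humans) is the minimum
p1000 = p_min_level_at_most(1000)  # chance it lies anywhere in the first 1,000 levels
```

Even granting humans the whole first thousand levels, the probability remains under one in a million; with only ten levels, `p_min_level_at_most(4, levels=10)` recovers the 1/7 from the informal statement.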
If the intelligence spectrum contains one billion levels, and every level above 3 is equally probable to be the minimum level, then the chance that any given level above 3 is the minimum level is 1 / (1,000,000,000 - 3) = 1 / 999,999,997.

Let us now consider some ways of responding to this argument.

6 Response 1: Humans can collaborate, and make amazing things

No single human can create a pencil from raw materials alone (if the human has to create all the necessary tools and machinery themselves from naturally occurring resources). 8 billion humans collaborate via a global economy. As a result of this global collaboration, humans can create things like computers, another example of something that no single human can develop from scratch.

Because humans can collaborate, and build such advanced technology, humans can build tools that measure the paths of things like photons. Using these tools, humans can create experiments like the double slit experiment. Results from the double slit experiment inspire humans to formulate the laws of quantum mechanics. If every human worked independently, pursuing a subsistence lifestyle, it is unlikely that any human would formulate the laws of quantum mechanics. By collaborating, humanity augments its capabilities, and solves problems that no single human could solve.

The power of collaboration is brought up by Nick Bostrom, in his thoughts on the Hypercomplexity Argument (private correspondence):

"The laws of physics however might be quite simple (a simplicity prior would assign this a high probability). If so, there is a limited search space of possibilities, and we might well be capable of eventually locating within this space the correct theory and understanding it (in whatever sense we understand e.g. Einstein's field equations).
I don't think it'd be that surprising if the first level of intelligence that can formulate your paper is also the first intelligence able to conduct a systematic search in the space of simple mathematical formulas and devise experiments to test them - the capabilities seem roughly comparable. You get various limited computing systems, and then you get Turing complete systems that are all equivalent, although some are much faster than others. Our intelligence seems to be like that - we're smart enough that perhaps over a few hundred generations we develop physics and figure out the correct theory, but far from smart enough to do it individually in a single lifespan, starting from scratch.

So you might say, yes an individual human is far from the level of intelligence that can develop an understanding of the laws of physics within a single lifetime - so there is no coincidence there - but our progress compounds, and the requisite level of cumulative effort happens to be around that of 100B human brain lifetimes (or maybe 1M if we only count those more directly working on the problem). Once the theory has been found, if it is fairly simple, which the simplicity prior suggests is fairly likely, it can be understood by any being who can understand fairly simple theories, which we already know that we can: and probably an anthropic argument would then account for why we should not be surprised to find ourselves being the kind of mind that is capable of understanding fairly simple theories (cf. Anthropic Bias: Observation Selection Effects in Science and Philosophy)."

Individual humans are at level 4 on the intelligence spectrum, but the human civilization, collaborating as a group, is equivalent in problem-solving ability to a being at a higher intelligence level. Perhaps the human civilization, working together in a group of 8 billion, is capable of solving problems that a single level 8 intelligence can solve: level 8 is as far above humans as humans are above ants.
We can construct a toy model where we choose a value of x, such that a group of billions of collaborating intelligences is x levels higher on the intelligence spectrum than any individual member of that group, working alone. In our toy model, let us assume that x = 4. Therefore, we will assume that:

• A group of billions of level 4s, working together, can solve problems that a single level 8 intelligence can solve.

• A group of billions of level 50s, working together, can solve problems that a single level 54 can solve.

• A group of billions of level 1,000,000s, working together, can solve problems that a single level 1,000,004 can solve.

In reality, the returns from collaboration within a group would depend on a number of factors, which would vary from species to species. The returns from collaboration may be linear, sub-linear, or super-linear.

In our toy model, where collaboration shifts a species up the intelligence spectrum by 4 levels, the overall probabilities associated with the Hypercomplexity Argument do not materially change. Humans, whether working together or not, are still low on the spectrum of possible intelligence: level 8 if they are working together, out of a billion levels.

7 Response 2: Humans can merge with computers, and scale the intelligence spectrum

Humans already use tools to augment their intelligence. A computer is one such tool. In artificial intelligence, people wonder about an intelligence explosion: this is where an AI is intelligent enough to re-write its own software, and make itself more intelligent. A level 4 AI can create a level 5 AI; a level 5 AI can create a level 6 AI. And so on, indefinitely. Perhaps humans could merge with the AIs, so that they themselves participate in this intelligence explosion. Suppose the minimum level of intelligence required to understand the laws of physics is 500,000,000. In an intelligence explosion scenario, perhaps human-AI hybrids reach that level in a thousand years.
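How demanding a chained intelligence explosion would be can be seen with a one-line compounding calculation. The per-step success probability below is purely an illustrative assumption of mine; the argument itself does not commit to any value.

```python
def p_reach(target_level, start_level=4, p_step=0.999):
    """Probability of climbing from start_level to target_level, assuming each
    one-level jump (a level-n intelligence building a level-(n+1) one)
    independently succeeds with probability p_step."""
    steps = target_level - start_level
    return p_step ** steps

p_modest = p_reach(8)             # four jumps: still very likely
p_extreme = p_reach(500_000_000)  # roughly 5 * 10^8 jumps: vanishingly small
```

On these assumptions, even a 99.9%-reliable step compounds to a negligible overall probability over half a billion jumps; the scenario becomes plausible only if p_step is extraordinarily close to 1.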
Humans may well merge with AIs, and this may well augment their intelligence. However, the chance that this will lead to humans augmenting their intelligence sufficiently to understand the laws of physics is slim. The argument for this is similar in structure to the Hypercomplexity Argument:

1. Let us ask: what is the minimum level of intelligence required to create an intelligence explosion that can ultimately lead to the ability to understand the laws of physics?

2. We know that monkeys could not create an intelligence explosion leading to the ability to understand the laws of physics.

3. Humans are at the low end of possible intelligence, out of millions or billions of levels.

4. Every level above level 3 seems equally likely to be the one that can create an intelligence explosion that can scale itself to the level required to understand the laws of physics.

5. The chance that level 4 is the minimum level that can create an intelligence explosion that allows it to scale to the level that can understand the laws of physics is low.

Someone might say "I want to push back on premise 4. I don't know about whether we are close to the laws of physics, but I do know that humans are exceptional at tool-building. I can see a case for thinking that level 4 is the break-point in the intelligence spectrum, where level 4 can create intelligence explosion events (enough to scale to being able to understand the laws of physics), and level 3 and below cannot."

To make premise 4 vivid, we can imagine the Cosmic Intelligence Conference again, with one representative from every intelligence level, from level 0 to level 1 billion. This time the MC poses the following prompt: "We are going to take our bets again.
This time the question is 'what is the minimum level of intelligence required to create an intelligence explosion event that can scale a species to the intelligence level required to understand the laws of physics?' Please take your bets."

As we write down our bets, how should we reason? If it was me, I would assume that every level above 3 is equally likely. I would reason as follows: "Merging with AI will definitely augment our intelligence. If the level required to understand the laws of physics was level 6, then I'm confident that merging with AI can help us scale to that level. For example, in certain areas, such as mental arithmetic, computers are probably already above level 6. However, if the level required to understand the laws of physics is closer to 500,000,000, then I would need to believe in the intelligence explosion process working hundreds of millions of times in a row: a level 4 intelligence creating a level 5; a level 5 creating a level 6; and so on, all the way up to 500,000,000. Solving intelligence bottlenecks is probably hard. Expecting a level n intelligence to be able to create a level n+1 intelligence, hundreds of millions of times in a row, is unlikely."

With this reasoning in mind, for my bet, I would choose at random between level 4 and level 1 billion.

8 Response 3: A superintelligence could discover the fundamental laws and explain them to us

It took a genius-level intellect to discover calculus, but now calculus is taught in schools. Discovering something is strictly harder than understanding it: discovering a proposition requires both finding it and understanding it, so understanding is one component of discovering. This insight does not affect the Hypercomplexity Argument, which is already phrased in terms of understanding, not discovery.
This insight would suggest that if the minimum threshold of intelligence required to understand the laws of physics was 500,000,000, then the minimum threshold of intelligence required to discover the laws of physics may be slightly higher than that.

9 Response 4: It's a mistake to think that humans are only 4 levels of intelligence above ants

In the description of the intelligence spectrum, we said that ants are at level 0; monkeys are at level 3; and humans are at level 4. According to this response, this description under-sells human achievements in relation to ants. We should think of ants as at level 0, and humans as at level 1 billion. And if the top of the intelligence spectrum is, say, 2 billion, then humans are indeed relatively close to the maximum possible intelligence, and the probabilities are favorable that humans are above the minimum threshold for understanding the laws of physics.

Premise 1 of the Hypercomplexity Argument says: "Humans are at the low end of the spectrum of possible intelligence, which has millions or billions of levels, and may be infinite." We can think of humans as being very far from ants on the intelligence spectrum. However, the examples above from mental arithmetic, creativity, and linguistic understanding suggest that the spectrum of intelligence is arbitrarily big, and may well be infinite.

10 Response 5: Perhaps we can understand the rules that govern the universe, even if we can't compute every consequence of those rules

In the case of the mental arithmetic spectrum we considered above, arguably we understand some techniques for calculating the digits of Pi. There is just a limit to how many digits we can compute in a minute. Similarly, we understand some techniques for factorizing numbers, but it gets harder to apply these rules as the numbers get bigger.
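The digits-of-Pi point can be made concrete. The Leibniz series below (chosen purely as an illustration) is a rule a human can fully understand in one sentence, yet the number of digits it yields is limited by computation, not by understanding:

```python
def leibniz_pi(terms):
    """Approximate Pi via the Leibniz series: 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
    The rule is trivial to understand; the cost lies in computing it, since
    each additional correct digit needs roughly 10x more terms."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

# A million terms of a perfectly understood rule yield only a handful of digits.
approx = leibniz_pi(1_000_000)
```

The gap between grasping the rule and computing its consequences is exactly the distinction this response turns on.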
Similarly, someone might argue: perhaps humans are capable of understanding the rules that govern the universe, even if we are not capable of computing, or understanding, every consequence of those rules. For example, perhaps the laws of the universe are simple enough to understand; and suppose that the laws, plus the starting position of the particles, determine the position of every particle in the universe. We would not be able to compute the position of every particle. But we might still understand the underlying laws.

It is indeed possible that the laws of physics are simple enough for humans to understand them. However, the thought experiment of the Cosmic Intelligence Conference is intended to illustrate that this is unlikely. The assumption that every intelligence level above 3 is equally likely to be the minimum threshold required to understand the laws of physics yields a low probability that level 4 is the minimum threshold.

11 Response 6: We could use abbreviations to understand the laws of physics

Rory Madden asked me: "Is the hypothesis that fundamental laws of physics would contain single sentences too long for a single human to parse? Can't we use abbreviations, division of labour . . . ?"

If we were to list the reasons that a monkey cannot understand quantum mechanics, those reasons would include mathematical complexity and causal complexity, amongst other factors. However, a monkey would not be able to hypothesize why it is unable to understand quantum mechanics: it lacks the brainpower to understand the laws, as well as to hypothesize what it is about those laws that makes them hard for a monkey to understand.

My guess is that humans, by the same token, are not able to understand the laws of physics, and are not able to hypothesize what feature of the laws of physics might make them hard for a human to understand.
12 Discussion

12.1 Humans Are Not Special

Humans have often thought of themselves as special, in some way or other:

• Humans thought that the sun and the stars revolved around the earth.

• In relation to the animal kingdom, humans thought that they had souls, but animals didn't.

These assumptions turned out to be wrong. Humans might like to think that they are close to figuring out the laws of physics, while all other species could not accomplish this task. Believing this would require believing either that humans are near the top of the spectrum of possible intelligence (say, level 4 out of 6 levels), or that level 4 is special: there are a billion levels in the intelligence spectrum, but level 4 is the special one.

When one thinks of humans as at the low end of possible intelligence, one starts to think of the universe differently. Presumably, a low-intelligence creature has only figured out a few basic principles about its local environment, in the same way that an orca has figured out a few basic principles about its environment. Instead of this being surprising, and a contrarian stance, it seems a natural stance. If we take seriously the premise that humans are low-intelligence creatures, the idea that humans have unlocked, or are capable of unlocking, the secrets of the universe seems extravagant by comparison.

12.2 The Hypercomplexity Argument applies to other fields that make universal claims

12.2.1 Math

We can ask "what is the minimum level of intelligence required to understand the principles of mathematics?" We could phrase the question in terms of the axioms, the theorems, or the rules of deductive logic that connect axioms with theorems. We know that a monkey, or an orca, cannot understand the fundamental principles of math. A monkey cannot wonder whether Fermat's Last Theorem is true.
The minimum level of intelligence required to understand the fundamental principles of math is between level 4 and the end of the intelligence spectrum (if there is an end). Each level seems equally probable. Therefore, the chance that level 4 is the minimum level is 1 / (the length of the spectrum - 3).

12.2.2 Ethics

If ethics is objective, then ethical claims apply across the universe (and are probably necessary too). We may ask: what is the minimum level of intelligence required to understand the principles of ethics?

A common methodology for formulating ethical principles is to start with some principles that we accept as foundational, and then to draw out the consequences of those principles. For example:

• Consequentialism is based on the principle that the ethical value of any act depends on the consequences of that act: how much value or disvalue the act creates.

• Kantian deontology is based on principles such as: ”Only do those things that it would be fine if everyone did.”

• Virtue ethics is based on principles such as ”do what a virtuous person would do.”

There are also ethical approaches based on analyzing our intuitions about Trolley Problem cases, and other ethical dilemmas, and formulating principles that explain those intuitions.

A methodological assumption in these approaches to ethics is to identify an ethical principle that sounds reasonable to humans, and then to extrapolate the consequences of that principle. The chance that a level 500,000,000 intelligence reasons from ethical principles similar to humans’ seems low. But we are not in a position to hypothesize what kinds of ethical principles such an intelligence would adopt.
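The uniform-prior calculation that recurs in this section (and in the physics case earlier) can be sketched in a few lines of Python. This is only a toy illustration of the arithmetic, under the paper’s stated assumptions: the minimum level lies between 4 and the top of the spectrum N, and every candidate level is equally likely.

```python
# Toy sketch of the paper's uniform-prior calculation.
# Assumption (from the text): the minimum intelligence level required to
# understand the fundamental laws lies somewhere in levels 4..N, and each
# of those N - 3 candidate levels is equally likely.

def p_level_4_is_minimum(n_levels: int) -> float:
    """Probability that level 4 is the minimum, under a uniform prior
    over the N - 3 candidate levels 4..n_levels."""
    return 1 / (n_levels - 3)

print(p_level_4_is_minimum(10))             # 1/7, roughly 0.143
print(p_level_4_is_minimum(1_000_000_000))  # 1/999,999,997, roughly 1e-9
```

The two printed values reproduce the paper’s worked examples: a ten-level spectrum gives humans a 1-in-7 chance of sitting at the minimum threshold, while a billion-level spectrum drives that chance to about one in a billion.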
13 Related Work

In ”What can we know about that which we cannot even imagine?”, the physicist David Wolpert writes:

”I am concerned with — and I wish to draw attention to — the issue of whether there are cognitive constructs that we cannot conceive of but that are as crucial to understanding physical reality as is the simple construct of a question. The paramecium cannot even conceive of the cognitive construct of a “question” in the first place, never mind formulate or answer a question; are there, similarly, cognitive constructs that we cannot conceive of, but that are just as necessary to knowing all of physical reality as is the simple idea of questions and answers? I am emphasizing the possibility of things that are knowable, but not to us, because we are not capable of conceiving of that kind of knowledge in the first place.” (Wolpert, 2023, https://arxiv.org/pdf/2208.03886.pdf)

Thomas Nagel makes similar remarks in ”The View From Nowhere”:

”It certainly seems that I can believe that reality extends beyond the reach of possible human thought, since this would be closely analogous to something which is not only possibly but actually the case. There are plenty of ordinary human beings who constitutionally lack the capacity to conceive of some of the things that others know about. People blind or deaf from birth cannot understand colors or sounds. People with a permanent mental age of nine cannot come to understand Maxwell’s equations or the general theory of relativity or Gödel’s theorem. These are all humans, but we could equally well imagine a species for whom these characteristics were normal, able to think and know about the world in certain respects, but not in all. Such people could have a language, and might be similar enough to us so that their language was translatable into part of ours.
”If there could be people like that coexisting with us, there could also be such people if we did not exist—that is, if there were no one capable of conceiving of these things that they cannot understand. Then their position in the world would be analogous to the one which I have claimed we are probably in.

”We can elaborate the analogy by imagining first that there are higher beings, related to us as we are related to the nine-year-olds, and capable of understanding aspects of the world that are beyond our comprehension. Then they would be able to say of us, as we can say of the others, that there are certain things about the world that we cannot even conceive.” (Nagel, 1986, The View From Nowhere, p. 95)

13.1 Mysterianism

Colin McGinn has argued that our brains are not capable of solving the mind-body problem. He writes:

”We have been trying for a long time to solve the mind-body problem. It has stubbornly resisted our best efforts. The mystery persists. I think the time has come to admit candidly that we cannot resolve the mystery.” (”Can We Solve the Mind–Body Problem?”, Colin McGinn, 1989)

The works above wonder whether the world is more complex than we think; I have not found papers that argue for that conclusion.

14 Conclusion: The universe is more weird and wonderful than our level 4 human brains are capable of imagining

The conclusion of the Hypercomplexity Argument is that a level 4 intelligence (humans) is likely unable to understand the fundamental laws of the universe: the laws of physics, math, ethics, or any other domain. Humans are not even close.

To me, the “we are not even close” viewpoint is more exhilarating than disappointing. The universe is more weird and wonderful than our level 4 human brains are capable of imagining. All the discoveries we are capable of making are just scratching the surface of reality.
The flip-side of this, however, is that any discovery we make is just a local one, like an orca discovering a feature of its environment. Unless we augment our intelligence dramatically, our discoveries will remain as far from engaging with the fundamental laws of the universe as the discoveries of an orca.

15 Acknowledgements

Thank you to Stephen Kearns, Nick Bostrom, John Hawthorne and Rory Madden for comments.