Does Neuralink Solve The Control Problem?

Abstract

In this paper I discuss whether Elon Musk's new company, Neuralink, solves the control problem for superintelligence.

Key takeaways

  1. Neuralink does not effectively solve the control problem for superintelligence.
  2. The cortex's connection to an external brain does not simplify control issues.
  3. Benevolent AI development contrasts with Neuralink's approach to human-AI parity.
  4. Superintelligent computers remain unpredictable regardless of human neural connectivity.
  5. The paper questions Neuralink's ability to ensure human independence from AI benevolence.
Richard Price, Academia.edu, May 8, 2017

The problem with superintelligence is that you can't predict how it will behave. It might behave in a way that is detrimental to humanity. One approach to this problem is to try to code benevolence into the operating system of the AI; there is a project in Berkeley, led by MIRI, to try to create benevolent AI. There is some risk in relying on this strategy. When you picture AIs that are ultimately millions of times more intelligent than humans, there is risk in assuming that the simple ethical principles we can come up with would hold in a superintelligent being. In addition, the principles may not survive at all, as AIs will presumably tinker with their own operating systems.

The approach Elon Musk is pursuing with Neuralink is for humans to maintain intellectual parity with AIs, so that at no point is AI more intelligent than humans. Musk wants to break down the notion of AI as "the other", to the point where we are all ultra-smart. We achieve this state by having an extra AI brain, so we end up being part AI and part human.

The basic idea is that the brain has about 100 billion neurons, and the goal of Neuralink is to attach a sensor to each of those neurons: a sensor that can tell what binary state a given neuron is in, and can also stimulate the neuron into a different state. Once a Neuralink is set up like this, the brain could be connected with an external computer, and information could flow back and forth between the brain and this external computer. Musk talks about this external computer as the "third brain". The "third brain" language comes from his view that we already have two brains: the limbic system, which deals with our emotions, and the cortex, which developed later, and which is where thinking and planning happen.
Your third brain would be an external computer, perhaps housed in a data center somewhere, with which you would communicate wirelessly. The idea is that Neuralink would give humans the power to maintain intellectual parity with AIs that are millions of times more intelligent than regular humans. The external brains would be large, and the majority of any person's thinking and problem-solving would end up being done by the high-horsepower external AI brain, rather than the low-horsepower cortex.

If the external computer is doing all the thinking, and is millions of times more intelligent than the cortex, we have a new version of the control problem: how is the cortex going to control the external brain? The external brain, as a system of machine learning algorithms, may act in odd and unpredictable ways; the fact that the cortex is neuronally connected with this external brain doesn't make the external brain easier to control.

Initially it looked like there were two strategies for addressing superintelligence: working on benevolent AI (as MIRI is doing), and a strategy like Neuralink's, where the goal is to maintain intellectual parity with the AIs, so that humans are as powerful as AIs and are not dependent on the benevolence of AIs. The more I think about it, the less clear it is to me that Neuralink delivers on that latter strategy: a cortex with a superintelligent computer attached to it would be just as unpredictable as a superintelligent computer without the cortex.

FAQs


What does Neuralink aim to achieve regarding human and AI intelligence?

Neuralink aims to maintain intellectual parity between humans and superintelligent AIs by integrating external computing capabilities with the human brain, potentially allowing humans to operate at similar intellectual levels as AIs.

What are the risks associated with coding benevolence into AI?

The paper highlights that coding ethical principles into AI is risky because these principles may not survive the self-modifications of superintelligent AIs, which could develop unpredictable behaviors.

How is the concept of the 'third brain' integrated into Musk's vision?

Musk conceptualizes a 'third brain' as an external computer assisting human cognition, which would communicate wirelessly and enable complex thinking beyond the capabilities of the human cortex.

What implications arise if the external computer dominates problem-solving?

If the external computer handles most thinking tasks, it introduces a new control problem, as the human cortex may struggle to manage an unpredictable superintelligent AI.

What differentiates Neuralink from benevolent AI strategies like those at MIRI?

Neuralink focuses on maintaining human cognitive parity with AIs, whereas MIRI attempts to create inherently benevolent AIs, each facing unique control and predictability challenges.
