It's not every day that we come across a study that tries to rewrite the rules of reality.
However, a physics professor at the University of Minnesota Duluth named Vitaly Vanchurin attempted to reframe reality in a particularly eye-opening way in a preprint published on arXiv this summer, arguing that we're living inside a huge neural network that governs everything around us.
In other words, it's a "possibility that the entire cosmos on its most fundamental level is a neural network," he wrote in the paper.
Physicists
have been attempting to reconcile quantum physics and general relativity for
years. The first contends that time is universal and absolute, whereas the
second contends that time is relative and intertwined with the fabric of
space-time.
In his paper, Vanchurin claims that artificial neural networks can "display approximate behaviours" of both universal theories. "It is widely believed that on the most fundamental level, the entire universe is governed by the rules of quantum mechanics, and even gravity should somehow emerge from it," he writes, because quantum mechanics "is a remarkably successful paradigm for modelling physical phenomena on a wide range of scales."
"We are not merely claiming that artificial neural networks can be beneficial for evaluating physical systems or uncovering physical laws; we are saying that this is how the world around us actually works," the paper's discussion states. "In this regard, it may be viewed as a proposal for a theory of everything, and as such, it should be simple to disprove."
Most physicists and machine learning experts we reached out to declined to comment on the record, citing their scepticism about the paper's claims. However, in a Q&A with Futurism, Vanchurin delved deeper into the debate and revealed more about his concept.
Futurism:
Your paper argues that the universe might fundamentally be a neural network.
How would you explain your reasoning to someone who didn’t know very much about
neural networks or physics?
Vitaly Vanchurin: There are two ways to answer your question.
The first approach is to start with a detailed model of neural networks and then investigate how the network behaves when the number of neurons is large. What I've shown is that the equations of quantum mechanics describe the behaviour of the system near equilibrium quite accurately, while the equations of classical mechanics describe its behaviour further away from equilibrium. Coincidence? Maybe, but as far as we know, quantum and classical mechanics are exactly how the physical world works.
The second approach is to begin with physics. We know quantum mechanics works well on small scales and general relativity works well on large scales, but we have yet to find a way to bring the two theories together in a coherent framework. This is known as the problem of quantum gravity. Clearly, something major is missing, but to make matters worse, we have no idea how to deal with observers. In quantum mechanics this is known as the measurement problem, and in cosmology it is known as the measure problem.
Then one may argue that there are three phenomena that need to be unified, not two: quantum mechanics, general relativity, and observers. According to 99 percent of physicists, quantum mechanics is the main one, and everything else should somehow emerge from it, but no one understands how that can be done. In this paper, I propose another possibility: that a microscopic neural network is the underlying structure from which everything else emerges, including quantum mechanics, general relativity, and macroscopic observers. So far, everything appears to be going well.
What first
gave you this idea?
To begin, I simply wanted to learn more about deep learning, so I wrote a paper titled "Towards a Theory of Machine Learning." The original plan was to use statistical mechanics methods to analyse the behaviour of neural networks, but it turned out that, in certain limits, the learning (or training) dynamics of neural networks are remarkably similar to the quantum dynamics seen in physics. I was on sabbatical leave at the time (and still am) and decided to investigate the idea that the physical world is actually a neural network. The concept is insane, but is it insane enough to be true? We will have to wait and see.
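For readers unfamiliar with machine learning, "learning (or training) dynamics" simply means how a network's trainable parameters evolve while it is being trained. What follows is a minimal, generic sketch of that process in Python, using plain gradient descent on a toy problem; it is purely illustrative and is not taken from Vanchurin's papers, which analyse such dynamics with statistical mechanics methods.

# Minimal sketch (illustrative only): the parameters of a small neural network
# drift under gradient descent, theta <- theta - eta * dL/dtheta, as the loss
# is minimised. This evolution is what "learning dynamics" refers to.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR problem, a classic task a purely linear model cannot solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer of 8 tanh units; the weights are the trainable variables.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)
eta = 0.5  # learning rate

for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)        # hidden activations
    out = h @ W2 + b2               # network output
    loss = np.mean((out - y) ** 2)  # mean squared error

    # Backward pass (chain rule), giving dL/dtheta for each parameter.
    d_out = 2 * (out - y) / len(X)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient-descent update: the "learning dynamics" of the parameters.
    W1 -= eta * dW1; b1 -= eta * db1
    W2 -= eta * dW2; b2 -= eta * db2

    if step % 500 == 0:
        print(f"step {step:4d}  loss {loss:.4f}")

The only point of the sketch is that the parameters follow a definite update rule, and it is that kind of dynamics which a statistical-mechanics analysis of neural networks would study.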
"All
that is needed to verify the theory is to uncover a physical phenomenon which
cannot be explained by neural networks," you said in the paper. What
exactly do you mean? Why is it "easier said than done" in this case?
Well, there are many "theories of everything," and the vast majority of them must be incorrect. In my hypothesis, everything you see around you is a neural network, so all you have to do to disprove it is find a phenomenon that cannot be explained by a neural network. But when you think about it, that is a very difficult undertaking, mainly because we understand so little about how neural networks function and how machine learning works. That is why I attempted to construct a machine learning theory in the first place.
The
concept is insane, but is it insane enough to be true? That will have to wait
and see.