Stephen Wolfram
[From PopTech] I’m supposed to blog an hour with Wolfram? Ay caramba!
I’m going to write some general comments, and then I’ll post my running notes.
Comments
I haven’t read Wolfram’s book and I am in no position to evaluate the truth or usefulness of what he said.
I hear what he says through a couple of filters. His general thesis – that structures as complex as the universe itself can be generated from incredibly simple rules – resonates. It’s the basic claim of chaos theory. And, for me, it helps get around my lifelong discomfort with the nature of scientific laws: the idea that the universe is governed by laws is too clearly an application of the governance paradigm to the physical universe. Wolfram’s theory gets us past this, but in the same way, of course, it applies the computer paradigm to the universe. And the fact that his paradigm maps to the paradigm of current technology isn’t just a coincidence.
Wolfram’s presentation was surprisingly clear. I followed more than I’d thought, although I certainly got lost as he went on. Unfortunately, I got lost as he got more and more interesting. I hate when that happens.
Ultimately, of course, the question is the extent to which the rules describe the universe or generated the universe. Not having read the book, I strongly suspect the answer is that the question is phrased entirely wrong. I’m definitely gonna buy the book and pretend to read it.
Anyway, on to the running notes…
[Ernie the Attorney‘s take on Wolfram is very funny.]
Notes…
John Benditt began by summarizing Stephen Wolfram’s idea: “The entire universe is the output of an algorithm the size of a four or five line computer program.”
Wolfram physically looks a bit like Jason Alexander, but that’s pretty much where the similarity ends. He’s British and, of course, some type of genius.
He came to his idea while writing programs that try to break down into primitives the things humans want to do (e.g., Mathematica). Suppose you could do the same for nature. What kinds of computer programs might be relevant? From writing mathematical programs, he thought it would have to be quite complex. But suppose you look at very simple programs, one line of code, even chosen at random. Pick the simplest programs and see what they do. So, he looked at “cellular automata.” A simple starting point and a simple rule can create complex patterns.
There are 256 simple (8-bit) cellular automata, so he decided to look at all of them. With rule 30, truly random patterns result. Very simple things go in and very complex things come out, which is against our normal intuition.
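The setup here is easy to reproduce. Below is a minimal Python sketch (my own, not code from the talk) of an elementary cellular automaton: the neighborhood (left, center, right) forms a 3-bit index, and the corresponding bit of the 8-bit rule number gives the new cell value, which is why there are exactly 2^8 = 256 such rules.

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton.

    Each cell's (left, center, right) neighborhood is read as a 3-bit
    number; that bit of the 8-bit rule number is the cell's new value.
    Boundaries wrap around (periodic).
    """
    n = len(cells)
    return [
        (rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Evolve rule 30 from a single "on" cell in the middle.
width = 31
row = [0] * width
row[width // 2] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Even from that single cell, rule 30 grows an irregular, random-looking triangle, which is exactly the counterintuitive result Wolfram is pointing at.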
So, he decided to point this new “telescope” at other phenomena. The same behavior occurs in a “vast array of systems.” It wasn’t noticed before because you need computers “and tools like Mathematica,” and because it goes against our intuition.
Why does the phenomenon happen? You need a new conceptual framework to explain that. All natural processes can be viewed as computations. Sometimes you know what the output will be ahead of time, e.g., with a cellular automaton designed to compute squares or to find primes. But there can be universal cellular automata that emulate other, dedicated automata when given different input.
The Principle of Computational Equivalence: “Any system whose behavior doesn’t look obviously simple to us will turn out to be performing a computation as sophisticated as any other.” [I may have blown that. The pool is getting over my head.] That is, if you look at a system with only simple rules, it will show behavior that’s simple and regular. But if you make the rules for the system just a tiny bit more complicated, you jump to having a system that is as sophisticated as any other.
This principle yields predictions: A system like this should be able to do universal computation. And it can.
You wouldn’t expect to find this in nature, since human-made universal computers are highly complex. But the principle suggests that there should be lots of systems in nature capable of sophisticated computation.
This explains why Cellular Automaton #30 looks complicated to us. Imagine a system and an observer who’s trying to decode what the system is doing. The PCE says that in many cases, the behavior of the system will be as complex as the systems inside the observer. That’s why #30 seems complex. This leads (somehow) to the Principle of Computational Irreducibility. E.g., we can figure out where the earth will be in its orbit 1M years from now just by plugging numbers into a formula. But in some cases, the only way to work out what will happen is to run the system, to do the experiment. That defines a limit on what one can expect to get from science.
For example: “The weather has a mind of its own.” The PCE says there’s some sense to this, in that fluid turbulence in the atmosphere is doing as sophisticated a computation as what’s happening in our minds.
Q&A with John Benditt
Q: You postulate that there is a rule for the universe itself. That seems preposterous because the universe is enormously complex. Defend yourself.
A: I might not believe that had I not seen all that the programs I was studying could do. Physics gets more complex the smaller the object of study gets. But that doesn’t have to be the case. A very simple program might be able to produce all the complexity.
What might that program be like? If the program is small, then the things immediately visible in our universe can’t be visible in that program. Also, there has to be as little as possible built into that program. Cellular automata already have too much built in: they have the notion of cells arranged in space, and the color of a cell is treated as different from the cell itself. In the end, one doesn’t need anything except space. [This is so similar to Hegel’s Logic, which generates the universe simply from Being. “Sein. Reines Sein.” And we’re off and running.] I ultimately suspect one doesn’t need anything more than pure space to generate the universe.
But what is space? In traditional science you don’t get to ask that question. But my guess is that space ultimately is a collection of discrete points and all we know is how those points are connected to other points.
Q: Isn’t this at odds with common sense and 300 years of science?
A: Yes. Newton and Einstein both see space as a background without its own properties. Einstein explored the idea that space is all there is later in his life. Space is a collection of nodes where every node is connected to three others.
So how does time work? Traditionally, time has simply been another dimension. But when you think about programs, time operates very differently than space. I think time is much closer to programs. For cellular automata, every cell gets updated in sync. But there’s probably no universal clock. So, maybe only one place in the universe gets updated at a time. It seems simultaneous because until I get updated, I can’t tell if you’ve been updated. Some known features of physics can be explained this way (e.g., relativity).
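The synchronous-versus-one-at-a-time distinction can be made concrete in code. This is my own toy sketch, not Wolfram’s actual model: the same local rule gives different results depending on whether all cells update in lockstep or one at a time, with each later cell seeing its neighbors’ already-updated values. Rule 90 (new cell = left XOR right) is used purely for concreteness.

```python
def synchronous_step(cells):
    """All cells update at once, each reading the OLD neighbor values."""
    n = len(cells)
    return [cells[i - 1] ^ cells[(i + 1) % n] for i in range(n)]

def sequential_step(cells):
    """Cells update one at a time, left to right, reading CURRENT values."""
    cells = cells[:]  # don't mutate the caller's list
    n = len(cells)
    for i in range(n):
        cells[i] = cells[i - 1] ^ cells[(i + 1) % n]
    return cells

row = [0, 0, 0, 1, 0, 0, 0]
print(synchronous_step(row))  # lockstep update
print(sequential_step(row))   # one cell at a time; generally differs
```

Wolfram’s point is subtler than this toy, of course: he’s suggesting the universe might update one “cell” at a time while still looking simultaneous to any observer inside it.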
“What’s encouraging is that from so little one gets out so much.” So, if we go all the way, we may be able to define the universe in one small program. “It won’t be as exciting as one might think because when the universe ran this program, it took 10B years to run the program.” And the Principle of Computational Irreducibility means that we can’t catch up: you have to actually run the program.
Q: What is the experimental program that will let us find this program?
A: The core of my new science is a type of abstract science. If the rule is simple enough, we could just search for it: search through the simplest one trillion rules. Some will be promising but will fail in some way, e.g., they won’t have time. Many elaborate tools need to be built.
Questions from the audience
Q: Is this falsifiable?
A: The core of what I’ve tried to do is more like mathematics than natural science. Falsifiability isn’t that relevant for mathematics. Math is tested on whether it’s useful in modeling the physical world. I expect there will be thousands of papers in ten years proposing very simple rules. I’ve myself proposed some surprising models for fluid turbulence.
Q: What effects does your thinking have on fields like philosophy?
A: There hasn’t been much time for people to integrate it into other fields. But the book does talk philosophy and is already making an impact there, e.g., computational irreducibility has implications for free will.
Q: What about the size of the initial conditions? In order to get universality, you can’t start with one bit on. What’s the number you need? Randomness is the most complex thing. When you come across complexity, you may be looking at it wrong and there may be a simpler way of looking at it. E.g., fractals may be complex and beautiful but result from a single line program.
A: The idea is that you can characterize the complexity of an entity only by looking at the complexity of the program that generated it.
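The questioner’s example of a fractal from a one-line program is easy to make concrete. Rule 90, where each new cell is just the XOR of its two neighbors, draws the Sierpinski triangle when run from a single cell. (This sketch is mine, not from the talk.)

```python
# Rule 90: each new cell is the XOR of its left and right neighbors.
# Run from a single "on" cell, the one-line update below traces out
# the Sierpinski triangle.
width, steps = 65, 32
row = [0] * width
row[width // 2] = 1
for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
```

The essential update really is the single list comprehension on the last line, which is roughly the questioner’s point: the complexity of the output says little about the complexity of the program that generated it.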
Q: If you count the amount of computer time you dissipate, not just the initial state, you can get complexity.
Q: You show that we see simple patterns at various scales.
A: Complex issue. Most CAs don’t look the same at all scales. (Fractals do.) … [and here my attention and understanding ended]
Hi David!
Stanko Blatnik in Velenje, Slovenia showed me this cool simulation/application of cellular automata for stress tests (see what happens when the Golden Gate Bridge gets hit by a helicopter). This new method works great for materials that crack and therefore aren’t handled well by grid computational methods. The institute in Tomsk, Siberia wants to apply this to help design materials and structures that are more earthquake-proof for the Bay Area, etc. Who might be good contacts for them?
http://www.tu-berlin.de/sfbs/sfb605/mca_method/
http://www.primarilypublicdomain.org/post/ Andrius
Andrius, good to hear from you. Wolfram talks about cracks in NKS; it’s cool to see it applied to something real. Unfortunately, though, I don’t know of anyone doing anything in this regard, other than the guy you mention. If I hear of someone, I’ll let you know.