[NKS] Wolfram: How it works
Wolfram is explaining the argument of his book. My blogging this is not going to be more helpful than reading more considered expositions, including Wolfram’s own. So, I plan on only jotting down some stray notes and thoughts.
Trivia: A cellular automaton (CA) gets its number by converting the pattern of bits that express the rule of the CA into a decimal number.
Wolfram shows how simple rules can lead to great complexity in simple CA. He asks how typical this is in the computational world: is it only CA that generate complexity from simple rules? CA seem special (everything updates at the same time, the rules are local, etc.), so is this generalizable? Wolfram systematically removes each of the CA’s special features. E.g., what happens if you don’t update all the cells at the same time? He looks at some variations (substitution systems, replacement systems) and finds that they too can generate complexity from simple rules.
Tag systems: Look at the first element of a string, chop it off, and append different strings at the end based on the color of that first element. If you chop off one element at a time, you get nested patterns. If you chop off two or more, you get much more complex behavior. In cyclic tag systems, the string you append (or whether you append at all) cycles with the step number. [Aha! In the book, tag systems turn out to be crucial in explaining the universality of CA 110, and I didn’t understand them until now.]
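The mechanics are simple enough to sketch (toy rules of my own for illustration, not the ones from the talk):

```python
# Toy tag system: read the first symbol of the word, delete the first v
# symbols, then append the string the rules assign to that first symbol.
def tag_step(word, rules, v=2):
    if len(word) < v:
        return None  # the system halts when the word gets too short
    return word[v:] + rules[word[0]]

# Illustrative 2-tag rules over the symbols a and b.
rules = {"a": "bb", "b": "ab"}
word = "aab"
for _ in range(4):
    word = tag_step(word, rules)
    print(word)
```

With v=1 (deleting one symbol per step) the behavior stays tame; v=2 and up is where the complex behavior Wolfram describes shows up, under richer rule sets than this toy one.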
[It’s 11:05 and he’s lost me. Too much math.]
He’s finding complex patterns in multidimensional systems and networked systems evolving through time, all part of his argument that simple-makes-complex isn’t an artifact of CA.
Nor is it dependent on the complexity of initial conditions. [This I believe is part of Wolfram’s radicalization of formalism: initial conditions are a type of contingency and having to rely on initial conditions would mean that the system isn’t entirely formal and mathematical.]
[He’s talking about constraints. I’m lost again. But he bounces up a level and says the point is that you can force constraints to yield complexity. This apparently has application to the nature of crystals.]
He’s finding the same simplicity-yields-complexity in arithmetic and math. Part of his point is that it’s a property not merely of an artificial construct like a CA. But I’m not sure if he’s re-pounding the same nail or whether he’s finding important insights within each area he’s discussing.
Now he’s talking about CA in which cells aren’t only black or white but could be any shade of gray. Guess what? Simple rules yield complexity. And it’s true of differential equations also.
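For what it’s worth, a continuous-valued CA can be sketched like this (my reconstruction in Python; NKS’s examples of this kind average the neighborhood, add a constant, and keep the fractional part — the particular constant here is arbitrary):

```python
# Sketch of a continuous ("gray-scale") CA: each cell holds a value in
# [0, 1). Update rule: average the three-cell neighborhood, add a
# constant, keep the fractional part. Wraps around at the edges.
def step(cells, a=0.3):
    n = len(cells)
    return [((cells[i - 1] + cells[i] + cells[(i + 1) % n]) / 3 + a) % 1.0
            for i in range(n)]

row = [0.0] * 20
row[10] = 1.0  # single "gray" seed cell
for _ in range(5):
    row = step(row)
```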
He’s summarizing: Each of these types of systems comes up over and over again, so it’s worth understanding something about how they work [A plea for a science of computation]. We’ve seen over and over again that simple rules can bring about great complexity. As soon as you pass a very low threshold of complexity in the initial rules, you get all sorts of wild results.
Next topic: Analyzing what simple programs do. What can you do to analyze a hugely complex CA-generated pattern? The end of the story is that there isn’t a way to crack something like that. But what does “crack” mean? Can we go from the sequence back to the rule and initial condition that generated it? Nope. When we say something has regularities, we mean we can summarize it more briefly than by just repeating the sequence. [E.g., “It’s a checkerboard” is a lot shorter than listing all 64 squares.] Can we compress some of the complex CA? Nah. Run-length encoding doesn’t work. Block-based compression doesn’t work. Dictionary-based? Nope.
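To make the compression test concrete, here’s a small sketch (my code, not Wolfram’s): generate the center column of Rule 30 from a single black cell and run-length encode it. The runs stay short, so the encoding doesn’t shrink the sequence.

```python
# Center column of elementary CA rule 30, started from a single black
# cell. Rule 30 is 00011110 in binary: neighborhood values 1-4 (i.e.
# 001, 010, 011, 100) map to black; the rest map to white.
def rule30_center(steps):
    cells = {0}          # positions of black cells
    column = []
    for _ in range(steps):
        column.append(1 if 0 in cells else 0)
        lo, hi = min(cells) - 1, max(cells) + 1
        cells = {i for i in range(lo, hi + 1)
                 if ((i - 1 in cells) << 2 | (i in cells) << 1
                     | (i + 1 in cells)) in (1, 2, 3, 4)}
    return column

def run_length_encode(bits):
    runs = []            # list of [value, count] pairs
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

col = rule30_center(50)
print(run_length_encode(col))  # lots of short runs: RLE doesn't help
```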
We say something is random if we can’t summarize it. When we call something complex, not only can’t we find a unique summary but we can’t find a summary of the properties we actually care about.
Wolfram is endorsing simply looking at patterns. With our eyes. Our visual systems are quite good at discerning patterns. How does our visual system do that? What kind of things will our visual system be able to disentangle? It’s very good at noticing repetition. It’s ok at noticing nesting where there are big blocks of repeating color.
Application to cryptography.
Summary of this part of the talk: One’s instinct when confronted with something like Rule 30 is to assume there’s got to be some pattern of regularity hidden there. But he’s tried lots of ways of “cracking” it and none work.
Summary of the talk overall: These are the sorts of issues that pure NKS talks about.
After lunch he’s going to talk about more technical approaches. Bad news for the likes of me.
Categories: Uncategorized dw