
A Little More Background

In my recent post, A Little Background, I started to try to explain how one might go about building networks that looked like smooth structures with integer dimension at large scales. In other words, networks that might look something like the empty space our planet sails through.

At the end of that post, I set a puzzle in which a group of party attendees who could only communicate via cellphone tried to organize themselves into a ring. I did a horrible job of making the details clear, but one of the fine commenters on this blog (Keir) solved it anyway. Hoorah for commenters!

Along the way, Keir remarked on the key point that I was hoping people would find. And that is this: when you’re trying to build a network that has some global property like the friendly geometry of space, it’s fairly easy to do if you add nodes to the network one at a time. It’s usually impossible if you try to acquire smoothness some other way, like, for instance, starting with a random network and telling it how to untangle itself.

This is true whenever you insist that the information available to each node in the network be local. In other words, so long as a node can only gather information about its neighbors, or even its neighbors’ neighbors’ neighbors, it won’t ever know enough to untangle its position relative to all the others. Bits of your network will always be breaking off. And even if you put in tricks to prevent the network from coming apart, it will retain twists and knots that stop it from ever sorting itself out (except perhaps in 1D).

This is interesting, and, to my mind, suggestive, because it says that if the universe is discrete, and if it started from simple initial conditions, then there should be ample evidence that it used to be tiny and got a lot bigger. And this, of course, is what we see. This doesn’t prove anything about the discreteness of spacetime, but it’s nice to know that reality and our simple simulation tools line up.

However, while it’s one thing to get drunk partygoers with cellphones to form an electronic conga-line, it’s trickier to get them to form a network that looks like a flat surface (unless, perhaps, it’s a party full of topologists). But it is doable, and the information that you need to make available to any node in the network doesn’t need to be that large. The result is the simplest closed surface that’s locally two-dimensional everywhere. In other words, a sphere.

Without getting too deep into the grimy details, here’s an example of a rule that does the job (there’s a rough code sketch of it after the list):

  1. Start with a graph of four nodes, all connected to each other.
  2. Make a new node that you’d like to add to the graph. Call this X.
  3. Pick a random node on the graph to add to. Call this A.
  4. Pick a second node that’s a neighbor of A. Call this B.
  5. Pick a node that’s a neighbor of A and B, with the lowest neighbor-count possible. Call this C. Call the set {A,B,C} the New Neighbors.
  6. Look at the triangles that are formed when you take X and any pair of New Neighbors. We call each of those triangles a Face.
  7. Each Face shares an edge with another triangle made of the same two New Neighbors and one other node. We call that other node the Opposite (O).
  8. If the sum of the neighbor counts of the two New Neighbors is greater than the sum of the neighbor counts of X and O plus two, then remove the link that runs between the New Neighbors, and add a link between X and O.
  9. Keep sweeping your Faces and repeating step 8 (each swap gives X a new neighbor, so the set of Faces grows as you go) until you’re not swapping any more links.
  10. Return to step 2 to add the next node.
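
Here’s a minimal sketch of the rule in Java, to make the steps concrete. The class name, the adjacency-map representation, and the way the Opposite node gets picked out are my own choices; the sample Java file linked further down may well do things differently.

    import java.util.*;

    // A rough sketch of the growth rule described above. The details of how the
    // Opposite node is located are one reading of step 7, not a definitive one.
    public class SphereGrower {

        private final Map<Integer, Set<Integer>> adj = new HashMap<>();
        private final Random rng = new Random();

        private void addNode(int n) { adj.put(n, new HashSet<>()); }
        private void link(int a, int b) { adj.get(a).add(b); adj.get(b).add(a); }
        private void unlink(int a, int b) { adj.get(a).remove(b); adj.get(b).remove(a); }
        private int degree(int n) { return adj.get(n).size(); }

        // Step 1: start with the complete graph on four nodes (a tetrahedron).
        public SphereGrower() {
            for (int i = 0; i < 4; i++) addNode(i);
            for (int i = 0; i < 4; i++)
                for (int j = i + 1; j < 4; j++) link(i, j);
        }

        // Steps 2-10: add one node X, wire it to three mutual neighbours
        // {A, B, C}, then swap links around X's Faces until nothing changes.
        public void grow() {
            List<Integer> existing = new ArrayList<>(adj.keySet());
            int x = adj.size();
            addNode(x);

            // Step 3: a random node A. Step 4: a random neighbour B of A.
            int a = existing.get(rng.nextInt(existing.size()));
            List<Integer> nbrsOfA = new ArrayList<>(adj.get(a));
            int b = nbrsOfA.get(rng.nextInt(nbrsOfA.size()));

            // Step 5: a common neighbour C of A and B with the lowest neighbour count.
            int c = -1;
            for (int n : adj.get(a)) {
                if (adj.get(b).contains(n) && (c == -1 || degree(n) < degree(c))) c = n;
            }

            // X's New Neighbours are {A, B, C}.
            link(x, a); link(x, b); link(x, c);

            // Steps 6-9: sweep the Faces (X, P, Q), find each Face's Opposite O,
            // and swap the P-Q link for an X-O link when P and Q are too crowded.
            boolean swapped = true;
            while (swapped) {
                swapped = false;
                List<Integer> star = new ArrayList<>(adj.get(x));
                for (int i = 0; i < star.size(); i++) {
                    for (int j = i + 1; j < star.size(); j++) {
                        int p = star.get(i), q = star.get(j);
                        if (!adj.get(p).contains(q)) continue; // (X, P, Q) is not a Face

                        // Step 7: the Opposite is the node on the far side of the
                        // P-Q link, i.e. a common neighbour of P and Q that isn't X
                        // and isn't already linked to X.
                        int o = -1;
                        for (int n : adj.get(p)) {
                            if (n != x && !adj.get(x).contains(n) && adj.get(q).contains(n)) {
                                o = n;
                                break;
                            }
                        }
                        if (o == -1) continue;

                        // Step 8: the swap condition on neighbour counts.
                        if (degree(p) + degree(q) > degree(x) + degree(o) + 2) {
                            unlink(p, q);
                            link(x, o);
                            swapped = true;
                        }
                    }
                }
            }
        }

        public static void main(String[] args) {
            SphereGrower surface = new SphereGrower();
            for (int i = 0; i < 500; i++) surface.grow();
            System.out.println("grew a graph with " + surface.adj.size() + " nodes");
        }
    }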

In practice, you have to be slightly careful how you implement this, but not that careful. In fact, I’ve found several algorithms that will work for 2D. Here’s the output from one that’s completely deterministic.

And here’s another that’s adding at random.

And here’s what it looks like inside a randomized surface, because it’s pretty.
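
As for being ‘slightly careful’: one cheap sanity check, assuming the sketch above, is Euler’s formula. A closed triangulated surface with the topology of a sphere satisfies V - E + F = 2, and because every face is a triangle (3F = 2E) that collapses to E = 3V - 6. Passing the check doesn’t prove you have a sphere, but failing it proves you don’t. Here’s a little method you could drop into the class above; again, it’s my own check rather than anything from the original file.

    // Sanity check for the SphereGrower sketch above: for a triangulated sphere,
    // Euler's formula V - E + F = 2 together with 3F = 2E reduces to E = 3V - 6,
    // so the edge count alone tells you whether the surface is still intact.
    public boolean looksLikeASphere() {
        int v = adj.size();
        int endpoints = 0;
        for (Set<Integer> neighbours : adj.values()) endpoints += neighbours.size();
        int e = endpoints / 2;
        return e == 3 * v - 6;
    }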

While I’ve had great success in 2D, I haven’t found an algorithm yet that will work for 3D. I don’t think that’s because it’s impossible; it’s just that it makes my head hurt. The bubble you form in the 3D network is wrapped in 4D, which takes a little getting used to. And when you increase the number of dimensions, the number of edge cases you need to consider goes up accordingly.

Of course, there are plenty of things wrong with these networks. For a start, they’re way too regular compared to the kind we’d like to run particles across. But these obstacles strike me as eminently surmountable. So while we’re still a ways away from being able to build a plausible Big Bang from scratch, there are at least signs that it could be done.

By the way, for those who like a challenge, I vigorously encourage you to take a crack at the 3D case. If you want inspiration, here’s a sample Java file I wrote to build a 2D surface. Also, if you’re looking for a fresh perspective on this topic, I can highly recommend Ray Hinde’s excellent new digital physics blog over at Finitism Forever. He seems to be tackling the same problem.

  1. June 7, 2012 at 7:41 am

    Alex, you said: “In other words, so long as a node can only gather information about its neighbors, or even its neighbors’ neighbors’ neighbors, it won’t ever know enough to untangle its position relative to all the others.”

    I’m not as pessimistic as you are about the inability of a local algorithm to build the “friendly” geometry of space. Although a node can only attempt to untangle itself locally, over iterations its influence can percolate to ever increasing distances away. If tangles always remain, as I agree they probably must, that may not necessarily stop more ordered large-scale smoothness from emerging.

    Of course, finding the algorithm is the tricky bit.

    • June 7, 2012 at 6:52 pm

      Right! Finding the algorithm is key. 🙂
      I guess what I would say is that the only way I’ve found so far to smooth away tangles is by growing them. If you add the ability to delete troublesome nodes, you can even resolve problem geometries. But if you are creating a smooth graph via addition and deletion, then why start with a tangled graph in the first place?
      My first instinct when I started playing with this was to seek out a general case method for untangling a graph without addition or removal. The more I did of this, the more convinced I became that it was impossible, though I confess I don’t have a formal proof at this point.
      What I can say, though, is that if each node has only local information, and nodes in the graph can’t differentiate between which parts of the global structure they belong to, then it seems it is always possible for parts of your network to break off. This is because a group of nodes can accidentally decide to select each other to create a disconnected bubble, and no one node has enough information to see what’s happening and retain a tether. This seems to be true regardless of the dimension, and regardless of the radius of local knowledge you impose.
      Maybe graph segmentation isn’t important but it seems that, over time, this results in unconnected graph bubbles that are scoped by the radius of local information. In effect, local information becomes global, relative to the size of the remaining graph.
      It took me a couple of years of crashing my head on the desk to reach this conclusion. So if you can find a way to reverse it, I will be both impressed and delighted.

      • Keir Finlow-Bates
        June 8, 2012 at 8:50 am

        That was exactly what I found with the people phone problem. If you have a hundred partygoers added at the same time, each of whom is assigned two other people’s numbers, there is a very high chance that you get disconnected graphs, and there is no mechanism that allows them to reconnect to the main graph. If you assign each person the host and one other random number, then you can avoid this situation initially (I think you referred to this kind of approach in a previous post: having a central node with all the other nodes coming off it, making a big spiky ball), but I can’t think of a subsequent algorithm for detaching nodes from the central node safely. If A and B detach from the host and get each other’s numbers, they can’t rejoin.

        Also having a special node with different properties to the others isn’t nice either.

      • June 8, 2012 at 9:08 pm

        Agreed. In order to solve the problem, so far as I can tell, you essentially need to give each node the ability to interrogate neighbor relations out to the limit of the graph. In essence, that means you’re storing information about the complete network at each node. And if you’re going to have that, then you’ve effectively got a completely connected graph, not a locally connected one. You’re just not choosing to draw on all the links.
        I met a complex systems theorist last weekend who told me that exactly the same problem shows up in language. If you have entities attempting to create shared meaning from a set of symbols, trying to learn all the symbols at once leads you into degenerate states. However, adding one symbol at a time to your lexicon enables your entities to develop a common understanding. It’s likely that there’s a close mapping between the problems, if not a one-to-one relation.
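        If you want to poke at the original party set-up yourself, here’s a quick throwaway sketch of one reading of it: each of a hundred guests gets two other guests’ numbers, a shared number counts as a two-way link, and it simply counts how often the resulting graph comes out in more than one piece. How you read ‘assigned two numbers’ changes the odds a lot, so treat it as a toy rather than a verdict.
            import java.util.*;

            // Toy version of the party-phone setup: every guest is assigned two
            // other guests' numbers, a shared number is treated as a two-way link,
            // and we count how often the resulting graph is disconnected.
            public class PartyPhones {

                public static void main(String[] args) {
                    int guests = 100, trials = 1000, disconnected = 0;
                    Random rng = new Random();

                    for (int t = 0; t < trials; t++) {
                        List<Set<Integer>> adj = new ArrayList<>();
                        for (int i = 0; i < guests; i++) adj.add(new HashSet<>());

                        // Each guest picks two distinct other guests at random.
                        for (int i = 0; i < guests; i++) {
                            Set<Integer> picks = new HashSet<>();
                            while (picks.size() < 2) {
                                int j = rng.nextInt(guests);
                                if (j != i) picks.add(j);
                            }
                            for (int j : picks) { adj.get(i).add(j); adj.get(j).add(i); }
                        }

                        // Breadth-first search from guest 0: can everyone be reached?
                        boolean[] seen = new boolean[guests];
                        Deque<Integer> queue = new ArrayDeque<>();
                        queue.add(0);
                        seen[0] = true;
                        int reached = 1;
                        while (!queue.isEmpty()) {
                            for (int n : adj.get(queue.poll())) {
                                if (!seen[n]) { seen[n] = true; reached++; queue.add(n); }
                            }
                        }
                        if (reached < guests) disconnected++;
                    }
                    System.out.println(disconnected + " of " + trials + " graphs were disconnected");
                }
            }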

  2. David Stewart Zink
    June 20, 2012 at 4:38 am

    Several aspects of this essay hurt my head too much for me to succeed at reviewing it.

    Please:

    Don’t speak of the Planet as “sailing through empty space” aka the network. A snapshot of the universe is much as you are describing it, but the network is not constantly connecting and reconnecting etc. as the planet moves; the network is coincident with (your pick) the gravitational field or the “space-time curvature” imposed by the planetary mass. As you know, there is literally more space (more network) within a gravitational field; gravity is reflexively the push in the direction where there is more space.

    The network is attached to the objects (well, you know that’s completely backwards, the objects are an interpretation of features of the network) but the point is that the reason why the Lorentzian contraction seems unbounded is because the network itself is scaled to the object in motion. (Believe me, you’ll never get a quantum of Lorentzian contraction to work well enough to talk about electrons, neutrinos & photons, and the contraction is pretty observable.)

    If you can replicate inertia as a sort of crystal defect in an infinite flat field…

    On a related note, please:

    Don’t algorithmize like “Make a new node that you’d like to add to the graph. Call this X. Pick a random node on the graph to add to. Call this A.” It’s a graph automaton: find all connected nodes (well, all sets of 4-space tri-sided quintahedra?) which meet certain criteria, and add a new node to the center. I mean, that might be what you’re thinking, but you wrote about God adding nodes at random…which is just evil.

    • June 25, 2012 at 11:42 pm

      Re your points about networks and planets:

      Yes, you’re right that the planet is a feature of the network, just like everything else. However, if you’re looking for a blog that puts technical exactitude before writing style, you’re probably in the wrong place. If you want technical exactness, I can send you some source code.

      No, the network does rewrite itself. In this model, it’s rewriting itself all the time. Just as particles are constantly spreading and regrouping due to QM effects, so are the elements of space.

      Modeling spatial distortion due to gravity by simply having ‘more’ or ‘less’ network in different regions of the system is probably the wrong way to go about it. Don’t forget that the distortion described in GR couples time and space. It’s not just a spatial phenomenon. Hence, so far I’ve had more luck simulating gravity by reordering the specific way in which the network is connected, rather than changing the spatial topology. If you want an example of particles following elliptical orbits using this approach, you can see one here:

      Re inertia and crystal defects:
      Are you familiar with any of the work on spacetime torsion? It looks interesting to me, though I’m not sure it’s meaningfully quantizable. I can’t buy the notion of spacetime as any kind of anisotropic structure.

      Re algorithmizing:
      I wrote about God adding nodes at random? Where did I do that? It doesn’t sound like me. And are you taking exception to the God, or the randomness?
      Also, if you don’t like the algorithm walk-through, is that because you don’t like the style, or because I didn’t make it easy enough to follow? If it’s the latter, I’ll try again. If it’s the former, I’m not sure there’s much I can do about it.

  3. David Stewart Zink
    June 20, 2012 at 5:02 am

    Forgot to mention: of course the original conditions could have been random and then partitioned. Why not and why would we care, and how could we tell? See: AU.

    The key issue with partition is this: if it were possible, is there anything that would force it to be complete? I mean, an incomplete partition would leave us with an irregularly connected space. Which would be very interesting, observable, but not necessarily from where we are standing. Is there a mechanism to repair the rents?

    Two are theorized:

    1) the fundamental rules prevent rents. All nodes are members of exactly N tri-sided X-tahedra and there is an extra dimension within which the network forms a hyper-sphere, preventing edges. This does not prevent complete rents, of course.

    2) there is a mechanism to “zip up” rents (a side-effect of the basic automata, or an additional rule for nodes in violation) so all rents are eventually reduced to singularity (irremediable knots).

    • June 25, 2012 at 11:49 pm

      The point is that if you start with a random network and finite local reach for your update rule, you can’t have a rule that prevents rents. You can have a rule that prevents nodes from being completely decoupled from the network by setting a minimum neighbor count, but this doesn’t prevent your network from fissioning into smaller sets. Whatever your local algorithmic horizon is, that’s the scale at which you’re going to get fissioning.

      As for zipping up rents, or fixing horrible topologies in the network, I suspect you’ll find that the only meaningful way to do this is to grow areas of network where the geometry is smooth, and shrink those where the geometry is horrible. But if you’re going to permit addition and removal of nodes via this mechanism, why start with a random network? Addition alone is sufficient to kick your network off.
