Preamble

This article is a stream-of-consciousness theory exercise. It is presented to inspire ideas and perspectives, and hopefully to fuel inquisitive minds to learn more. It is likely that there are fundamental reasons why many of my ideas are wrong, and I simply haven’t learned them yet. You may also see me take the long way around to logic my way back into known quantum physics principles.

While it is hopefully an interesting read, it is not backed by sources, and is in no way meant to be a statement of fact. It’s a series of thought experiments that adopt a modified perspective and ask what would happen if you approached a problem that way. Granted, much of it IS rooted in known science, but my thoughts often veer into the experimental. With that in mind:

What if

What if wave function collapse and quantum uncertainty can be shown to be a function of all possible outcomes? Effectively, the wave IS all of its possible outcomes. When the wave has an end state, or goal, only the path that ends in that goal is realized, and the others simply never existed. This combines retro-causality with collapse.

Before the end state, the wave exists in all possible states. Once it has a goal, only one state is possible.

This is how an AI trained with reinforcement learning works. Eventually it finds one path to the goal that works, and if conditions remain the same, it can use that exact set of instructions to reach the goal again. It can drop all other permutations or potentialities. But how, then, does the double slit experiment work? Theoretically the goal would be one slit or the other, and once the goal is established, the particle or wave was always that one potentiality.
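
A minimal sketch of that idea in Python (the environment, reward, and hyperparameters are all made up for illustration): a tabular Q-learning agent on a tiny one-dimensional track samples many paths during training, but once trained, acting greedily replays one fixed path to the goal, and every other permutation is simply dropped.

```python
import random

random.seed(0)
N, GOAL = 6, 5          # positions 0..5, with the goal at position 5
ACTIONS = (-1, +1)      # step left, step right
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    s2 = max(0, min(N - 1, s + a))           # move, clamped to the track
    return s2, (1.0 if s2 == GOAL else 0.0)  # reward only at the goal

def greedy(s):
    best = max(q[(s, a)] for a in ACTIONS)   # break ties randomly
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

# Training: many potential paths are explored and scored.
for _ in range(300):
    s = 0
    for _ in range(100):
        a = random.choice(ACTIONS) if random.random() < 0.1 else greedy(s)
        s2, r = step(s, a)
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2
        if s == GOAL:
            break

# After training: one path remains; the other potentialities are never taken.
s, path = 0, [0]
while s != GOAL:
    s, _ = step(s, greedy(s))
    path.append(s)
print(path)  # [0, 1, 2, 3, 4, 5]
```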

Because we see the wave passing through both slits, what is keeping it from collapsing? We could measure a property and immediately forget it, but that doesn’t change the fact that the property was measured. So if we take this principle and apply it to machine learning: the AI learns what the next step would be, effectively collapsing uncertainty. Then it weights its actions toward that known point. To “forget” that known point wouldn’t simply mean forgetting the position the object was in; the weights and the ability to predict the next action would still be there. Yes, we no longer have the empirical data, but we have the neural net’s reaction to it. The state of the neural net has changed as a result of its interaction with the car’s known state. So for the particle to return to its superposition state, you would need to reverse the state of everything it affected as well.
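
A tiny sketch of this “forgetting isn’t enough” point, with a purely illustrative update rule and values: a measurement nudges a predictor’s weights, and deleting the record of the measurement afterward does not restore the predictor’s prior state.

```python
weights = [0.0, 0.0]

def learn(measurement):
    # Toy update: nudge every weight toward the observed value.
    return [w + 0.1 * (measurement - w) for w in weights]

before = list(weights)
measurement = 4.2         # the object's observed state at one instant
weights = learn(measurement)
del measurement           # "forget" the empirical data itself
print(weights == before)  # False: the net's reaction to the data persists
```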

In a simulated environment, this would require recording every single state of the neural net and relating it to every single state of the car. The longer the span of time between the first interaction and the goal state, the more the number of potential states grows, and it grows exponentially. This would quickly climb beyond any possibility of storage.
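
A back-of-envelope illustration with made-up numbers: if the joint (network state, environment state) can branch into just two alternatives per timestep, the number of histories to record doubles every step.

```python
branching, steps = 2, 100
histories = branching ** steps  # 2^100 ~ 1.27e30 distinct histories
bytes_each = 1_000_000          # say, 1 MB per recorded history
print(f"{histories:.3e} histories, {histories * bytes_each:.3e} bytes")
```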

The only way to make this tractable would be to discard any potentialities that do not move closer to the goal. But if you do this, you eliminate the ability to return to any other set of states. You can rewind along your already-defined path and explore other potentialities from there, but you would never know whether another potential starting point would be more efficient, or could also end at the goal, without running those potentialities simultaneously.
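
A toy search example of that trade-off (the graph and its costs are hypothetical): pruning down to the locally best branch still finds a path, but the discarded branches can never be revisited, so a better route is missed unless every potentiality is explored at once.

```python
# Edges with costs; 'G' is the goal.
GRAPH = {
    'A': [('B', 1), ('C', 5)],
    'B': [('D', 10)],
    'C': [('G', 1)],
    'D': [('G', 1)],
}

def greedy_path(node):
    """Keep only the cheapest next edge; discard the rest forever."""
    path, cost = [node], 0
    while node != 'G':
        node, c = min(GRAPH[node], key=lambda e: e[1])
        path.append(node)
        cost += c
    return path, cost

def all_paths(node, path=None, cost=0):
    """Exhaustive enumeration: every potentiality is kept simultaneously."""
    path = (path or []) + [node]
    if node == 'G':
        yield path, cost
        return
    for nxt, c in GRAPH.get(node, []):
        yield from all_paths(nxt, path, cost + c)

print(greedy_path('A'))                         # (['A','B','D','G'], 12)
print(min(all_paths('A'), key=lambda p: p[1]))  # (['A','C','G'], 6)
```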

So the longer the goal is from the starting point, the more potential paths there are. Once a path to the goal is found, you must re-run the simulation from the start to find a new path. The shorter the path, the more likely you are to arrive at the same solution, because there are fewer potential paths.

To scale the exploration of potential paths to a goal state, you need more AIs, brains, or any other pattern-sorting systems, each considering a path from state to goal. Because of the linear nature of time, no matter how powerful your brain or AI is, it can only consider one path at a time. A brain, here, is defined as a system that contains a current state of knowledge, which has been affected by knowing the state of something. Your neural net cannot be in two states at once. You could have two separate neural nets that exist in separate states, but at that point all possible future permutations of those neural nets will be inherently different. Possibly similar, but never the same.
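
A small sketch of that last claim, with toy weight vectors: two identical copies diverge permanently after a single differing update, even when every later update is shared.

```python
w1 = [0.5, -0.3]
w2 = list(w1)                # an exact copy: same state, same future
w2[0] += 0.01                # one interaction only the second net has

for g in [0.1, -0.2, 0.05]:  # identical training signal from here on
    w1 = [w * (1 + g) for w in w1]
    w2 = [w * (1 + g) for w in w2]
print(w1, w2, w1 == w2)      # similar, but never the same again
```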

This suggests that “wave function collapse” is a retro-causal effect that happens when a particle changes the state of something. The only way to keep that from happening is to revert the state of anything changed by its interaction, i.e., to reverse time. At small scales, this effect would be much more localized: the number of things that would need to be reset would be smaller, because the things the particle interacted with never interacted with anything else. We could call this wave function elasticity.

The larger this interaction gets, the more “set in time” the wave function collapse becomes. Potentialities must be eliminated or they would lead to branching infinities.

The denser an object is, and the more energy it has, the greater this interaction becomes, as the number of states that get changed increases. It also suggests that wave function collapse may not be entirely brittle. The more states an interaction changes, the more brittle the wave function becomes. If, for instance, only one measurement is taken, only one potential state existed. If you then measure the same property of the same object again, it has now affected multiple states of objects over time. Its potential solutions to that end state shrink. Conversely, the potential states of other properties that would result in that end goal increase.

This is, in some ways, just a restatement of the Heisenberg uncertainty principle. But it incorporates an argument based on retro-causality and suggests a limit to the knowable potentialities of any system that affects another system.

So what we perceive as gravity, or a warping of spacetime, could be distilled to a quantifiable reduction of potential states due to interactions with a system. Not just a reduction of future potential states, but of past ones. To avoid infinite storage, or every possibility existing at once, paths that do not end in that system changing must be eliminated; otherwise the system wouldn’t change. I don’t know if it was the intended meaning, but the statement, “I think, therefore I am,” takes on a very interesting causal meaning here.

A better way to look at wave function collapse would be as a change in its causal nature. Instead of thinking of the act of measurement as collapsing the function, we should think of the change in the state of the goal as the driving factor behind the collapse. This is a bit paradoxical, because it suggests that the present does retroactively affect the past. It could also mean that our present is decided by retroactive changes from the future.

In this scenario, the future state of something that interacts with something we see in the present has been decided by its interaction with that thing in the future. Of course, the only way to prove this would be to affect the very interaction we could only presume was the cause of this state in the first place.

I’m not setting out to prove simulation theory, but the parallels are interesting. Light, or anything without mass, moves as fast as anything can because it generates the fewest state changes or interactions. The more state changes something induces, the slower time moves for it, relatively, in order to “process” those interactions.