Observe that computers based on such a loop could instantly find a
fixed point of a given function:
Let's take for example some NP problem - we can quickly check whether
a given input is correct, but there is a huge (but finite) number of
possible inputs.
So this computer could work as follows:
- take the input from the base of the loop,
- if it is correct, send the same input back in time to the base of
the loop; if not, send the next possible input (cyclically).
If a correct input exists, it will be the fixed point of this time
loop; if not, the loop should return some garbage.
So we would only need to verify the output once more afterwards
(outside the loop).
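Classically unrolled, the loop body above is just an ordinary fixed-point search. Here is a minimal Python sketch; the verifier (finding a nontrivial factor of 15) and the input list are toy stand-ins I've assumed for illustration, not part of the original proposal:

```python
def find_fixed_point(verify, inputs):
    """Simulate the time loop classically: apply the loop body until
    the candidate maps to itself (a fixed point) or we exhaust the
    finite input space."""
    n = len(inputs)
    i = 0
    for _ in range(n):            # at most one full cycle through the inputs
        if verify(inputs[i]):
            return inputs[i]      # fixed point: the loop is stable here
        i = (i + 1) % n           # otherwise advance cyclically
    return None                   # no solution: the loop returns "trash"

# Toy NP-style instance: find a nontrivial factor of 15.
solution = find_fixed_point(lambda k: 15 % k == 0, list(range(2, 15)))
```

Of course the classical simulation takes as many steps as there are inputs; the whole point of the thread is whether physics could collapse that search into one self-consistent pass.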
Can such scenario be possible?
General relativity says that local arrows of time are given by
solutions of its equations with boundary conditions (the Big Bang).
CPT symmetry suggests that there shouldn't be a large difference
between past and future. These arguments support the so-called
eternalism/block-universe philosophical concept - that spacetime is
already somehow created and we are 'only' traveling through its time
dimension.
I've recently made some calculations which give a new argument that
such an assumption actually leads to quantum mechanics:
Pure mathematics (maximizing uncertainty) gives a statistical
property - the Boltzmann distribution - so it should be a completely
universal statistics.
If we use it to find the distribution on a constant-time plane, we
get the stationary probability distribution rho(x)~exp(-V(x)).
If we use it to create statistics among paths ending at this moment,
we get rho(x)~psi(x) (the quantum ground state).
If we use it to create statistics among paths that don't end at this
moment but go further into the future, we get rho(x)~psi^2(x) - as in
quantum mechanics.
So the only way to get QM-like statistical behavior is to treat
particles as their paths in four-dimensional spacetime.
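The path-statistics claim can be checked numerically on a toy model. Below, positions are discretized on a small lattice, paths hop between neighboring sites with Boltzmann weight exp(-V), and the dominant eigenvector of the resulting transfer matrix plays the role of psi; the particular lattice and harmonic-like potential are my assumptions for illustration only:

```python
import math

# Discretize positions; V is a harmonic-like well centered at N//2.
N = 21
V = [((i - N // 2) / 4.0) ** 2 for i in range(N)]

def transfer(v):
    """Apply the transfer matrix: nearest-neighbor hops weighted by
    exp(-V), then L2-normalize (power iteration step)."""
    out = []
    for i in range(N):
        s = 0.0
        for j in (i - 1, i, i + 1):
            if 0 <= j < N:
                s += math.exp(-(V[i] + V[j]) / 2) * v[j]
        out.append(s)
    norm = math.sqrt(sum(x * x for x in out))
    return [x / norm for x in out]

psi = [1.0] * N
for _ in range(500):          # converge to the dominant eigenvector
    psi = transfer(psi)

# Density of points lying on long paths (ending AND continuing): psi^2,
# peaked at the bottom of the well like a quantum ground state.
rho = [p * p for p in psi]
```

The point of the sketch is only that statistics over whole paths, not over instantaneous configurations, produce the psi^2-type density.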
So spacetime looks like a four-dimensional jello - 'tension' from
both past and future influences the present.
http://www.scienceforums.net/forum/showthread.php?t=36034
It suggests that particles should, for example, somehow prepare
before they are hit by a photon. The question is whether this can be
measured (uncertainty principle)? If yes - are these times long enough
to be useful?
Observe that if the answer is yes, such a computer could e.g. break
RSA in a moment. To make cryptosystems resistant to such attacks, they
should require a long initialization (like one based on Asymmetric
Numeral Systems).
Which physicists are those that believe in "instant time travel"?
In other words, you can't name any physicists that claim to believe
in "instant time travel"!
One way to make such a time loop that I can imagine: if we think
about, for example, electron-photon scattering, in four dimensions it
would have a sharp edge. Physics doesn't like non-differentiability,
so the 4D field should smooth it out - the electron should prepare
before this incident. But such an effect would be rather weak and
short in time. Maybe Mössbauer spectroscopy of the nucleus around
which the electron moves would spot something...
The other way could be through high energies - it's energetically
preferable for field solutions to propagate in the time direction the
way other solutions do. But if they already have high energies, this
restriction weakens.
What does "instant" mean in this context? If I for instance travel
from a given "moment"* to say five hours previous to then, how much
time do I experience between leaving and arriving? None? Does that
mean that the state functions of all the particles that compose me do
not evolve at all? What stops them and what starts them back up?
This does not seem reasonable even for a single particle. Better to
consider the particle continuously experiencing proper time.
> Let's assume hypothetically something much simpler and looking more
> probable - that physics of four-dimensional spacetime we are living
> in, allows for microscopic loops which include time dimension.
The reverse-time trajectory is not a problem; the accelerations
necessary to reverse temporal momentum at the beginning and end of the
trip can be, though.
> If they would have at least microseconds and we could amplify/measure
> them (Heisenberg uncertainty principle...), we could send some
> information back in time.
I'd suggest you look for a natural physical situation where a
particle can random walk at least partially in time as well as space.
An obvious candidate is any process allowing a particle to change from
itself to its antiparticle (AKA particle traveling backward in time)
and back, but I know of none offhand. If such exists, to be useful it
would need to preserve at least one of the original particle's quantum
numbers for information storage.
> Observe that computers based on such a loop could instantly find a
> fixed point of a given function:
> Let's take for example some NP problem - we can quickly check
> whether a given input is correct, but there is a huge (but finite)
> number of possible inputs.
> So this computer could work as follows:
> - take the input from the base of the loop,
> - if it is correct, send the same input back in time to the base of
> the loop; if not, send the next possible input (cyclically).
> If a correct input exists, it will be the fixed point of this time
> loop; if not, the loop should return some garbage.
> So we would only need to verify the output once more afterwards
> (outside the loop).
>
> Can such a scenario be possible?
> General relativity says that local arrows of time are given by
> solutions of its equations with boundary conditions (the Big Bang).
> CPT symmetry suggests that there shouldn't be a large difference
> between past and future. These arguments support the so-called
> eternalism/block-universe philosophical concept - that spacetime is
> already somehow created and we are 'only' traveling through its time
> dimension.
Here's the main snag with ideas like the one you've been going on
about. Any signal that "detaches" from the main wavefront of "reality"
that is passing through your block so that it can retrace its path or
take a new path will lose its relationship to the reality wavefront it
came from. It may gain a new relationship with a subsequent wavefront
that passes through the same part of the block but it will not affect
the original wavefront reality it came from; there will be no observed
output on your hypothetical time-loop computer except what it
intercepts from other, previous universal wavefronts that passed
through the same part of the block. Such outputs will have absolutely
no predictable correlation with what you input, so it might as well be
perfect noise.
> I've recently made some calculations which give a new argument that
> such an assumption actually leads to quantum mechanics:
> Pure mathematics (maximizing uncertainty) gives a statistical
> property - the Boltzmann distribution - so it should be a completely
> universal statistics.
> If we would use it to find the distribution on a constant-time plane
By "constant time plane" you mean a three-dimensional cross-section
through fourspace such that all points on it are "synchronized"? You
_are_ aware that "instantaneous" has only local meaning?
> we would get the stationary probability distribution
> rho(x)~exp(-V(x)).
> If we would use it to create statistics among paths ending at this
> moment, we would get rho(x)~psi(x) (the quantum ground state).
> If we would use it to create statistics among paths that don't end
> at this moment but go further into the future, we would get
> rho(x)~psi^2(x) - as in quantum mechanics.
> So the only way to get QM-like statistical behavior is to treat
> particles as their paths in four-dimensional spacetime.
No, that isn't "the only way". It is a very good way to get the
immutable block you go on about above; in it, statistics are an
illusion.
I favor a literal version of the Many Worlds Interpretation: seeing
the whole universe as a single wavefunction evolving through time,
taking all possible paths from past to future the exact same way an
electron's wavefunction takes all possible paths from say a vacuum
tube's cathode to its anode. What we see in the tube is the "sum over
histories" of the electron's trajectories, and what we observe in the
universe is _its_ sum over literal histories.
That however does not include any obvious way to get your desired
future-to-past signaling capability. Have you looked at Wheeler/
Feynman's advanced wave concept? AFAICT it is not necessarily
incompatible with the MWI.
Mark L. Fergerson
Maybe. But the computational cranks from QM still need reminding often
that the people with computer brains developed optical computers,
parallel processing, laser disks, laser-guided lasers, fiber optics,
CD+rw, DVD-rom, DVD-ram, and post-AT&T compilers, because the biggest
problem with computers is getting them out of loops, rather than into
loops.
> General relativity says that local arrows of time are given by
> solutions of its equations with boundary conditions (the Big Bang).
> CPT symmetry suggests that there shouldn't be a large difference
> between past and future. These arguments support the so-called
> eternalism/block-universe philosophical concept - that spacetime is
> already somehow created and we are 'only' traveling through its time
> dimension.
> I've recently made some calculations which give a new argument that
> such an assumption actually leads to quantum mechanics:
> Pure mathematics (maximizing uncertainty) gives a statistical
> property - the Boltzmann distribution - so it should be a completely
> universal statistics.
> If we would use it to find the distribution on a constant-time
> plane, we would get the stationary probability distribution
> rho(x)~exp(-V(x)).
> If we would use it to create statistics among paths ending at this
> moment, we would get rho(x)~psi(x) (the quantum ground state).
> If we would use it to create statistics among paths that don't end
> at this moment but go further into the future, we would get
> rho(x)~psi^2(x) - as in quantum mechanics.
> So the only way to get QM-like statistical behavior is to treat
> particles as their paths in four-dimensional spacetime.
> So spacetime looks like a four-dimensional jello - 'tension' from
> both past and future influences the present.
> http://www.scienceforums.net/forum/showthread.php?t=36034
Huh?
> because the biggest problem with computers
> is getting them out of loops, rather than into loops.
Wrong, again. If there wasn't a looping mechanism, you would
be digging ditches with a shovel right now instead of typing
at your terminal which houses a complete computer system.
<snip>
/BAH
I would also think that having the same atom in two places at the
same time would be an annoyance.
<snip>
/BAH
So if there were causality time loops, they should already be
stable - without paradoxes. That's an argument against stable
wormholes, but I'm talking about some very sensitive microscopic
phenomena.
So if the algorithm I've presented would create a paradox, the loop
couldn't stabilize; physics should destroy the weakest link of this
loop - the predicting/transferring back in time, for example by
shortening this sensitive relation.
If there is a stable loop (the answer is true), physics should find
it, and in fact it would look as if the verifier had checked only this
input. So there should be no problem with getting out of the loop -
either way it would look as if it was done just once.
I thought about whether we could reduce the required number of bits
transferred back in time, and it looks like one bit (B) should be
enough (though this algorithm intuitively looks less stable?):
- if B then 'input' -> next possible 'input' (cyclically)
- if 'input' verifies the problem
-- then transfer back in time B=false
-- else transfer back in time B=true.
If it can, it should stabilize on B=false and some solution.
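Unrolled classically, the one-bit protocol looks like the sketch below; the verifier and input list are the same hypothetical toy stand-ins (factoring 15) as before:

```python
def one_bit_loop(verify, inputs):
    """Classical unrolling of the one-bit time loop: B arriving from
    the 'future' says the current input failed; advance on B, then
    send back B = (input does not verify).  A stable loop means
    B=False together with a verified input."""
    i, B = 0, True                     # start with a 'failure' signal
    for _ in range(len(inputs) + 1):   # at most one full cycle
        if B:
            i = (i + 1) % len(inputs)  # next possible input, cyclically
        B = not verify(inputs[i])      # B=False iff the input verifies
        if not B:
            return inputs[i]           # fixed point: the loop stabilizes
    return None                        # no stable assignment of B exists

# Toy instance: find a nontrivial factor of 15.
answer = one_bit_loop(lambda k: 15 % k == 0, list(range(2, 15)))
```

Note that when no solution exists there is no consistent value of B at all (B=true forever just cycles), which matches the intuition above that this variant is less stable than sending the whole input back.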
Such an algorithm means that it uses the input (B) from some physical
process which can predict, for example, whether a photon will be
absorbed, and at the end emits this photon or not.
If physics can stabilize this causality loop, it should do so. If not,
it would be stabilized by making the prediction give the wrong answer.
If for a given instance of the problem a dedicated chip were created -
one which makes its calculations layer by layer (without a clock) - it
should do the verification in nanoseconds; such jumps are easier to
imagine.
This suggests a nice thought experiment: make such a loop, but much
simpler - purely spatial (a tube in four dimensions): take a chip with
a verifier for some problem and the algorithm from the first post. Now,
instead of sending the 'input' back in time, just connect the output
to the 'input'.
Such a loop should quickly check input after input and finally settle
into a stable loop if it can...
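The clockless feedback loop can be modeled as repeatedly re-evaluating the combinational update until its output stops changing; the loop body below is a toy stand-in (the factor-of-15 example again), assumed for illustration:

```python
def settle(update, x0, max_steps):
    """Model the clockless feedback circuit: its output is wired to
    its input, so the state keeps re-evaluating `update` until (if
    ever) it stops changing, i.e. reaches a fixed point."""
    x = x0
    for steps in range(max_steps):
        nxt = update(x)
        if nxt == x:
            return x, steps        # settled on a fixed point
        x = nxt
    return None, max_steps         # never settles (oscillates)

# Loop body from the first post: keep a verified input, else advance.
inputs = list(range(2, 15))
def body(i):
    return i if 15 % inputs[i] == 0 else (i + 1) % len(inputs)

sol, steps = settle(body, 0, 2 * len(inputs))
```

In this classical model the settling time is still proportional to the number of inputs tried - each re-evaluation is one gate-delay pass - so "practically instantly" below is really a claim about gate delays versus clock cycles, not about skipping the search.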
Is that really so????
This scenario requires a clock, doesn't it?
What if there were no clock...? :)
Shouldn't it find the solution practically instantly?
Right again. Since the looping mechanism is only ever there as a
mathematical trick. Which is also why the much smarter engineers even
invented the post-AT&T fiber optics, E-Libraries, E-Books,
E-Publishing, On-Line Publishing, GPS, Post Ford Batteries, Adaptive
PV Cell Arrays, Post GM Robotics, Post McDonald's Holograms, Drones,
AAVs, AUVs, RISC++, USB, XML, CD+rw, DVD-rom, DVD-ram, DVD-rw, HDTV,
and laser-guided lasers for the Computeroid Stooges. Since Quantum
Computers don't exist now, and never will.
> If there wasn't a looping mechanism, you would
> be digging ditches with a shovel right now instead of typing
> at your terminal which houses a complete computer system.
>
> <snip>
>
> /BAH
GR, field theories, and CPT conservation strongly suggest eternalism -
that spacetime is already created and stable. So the causality loops/
questions we will ask are already stabilized/answered.
And from the perspective of our perception of time, such a loop will
last only a moment and will make the classical verification only once -
of the correct solution.
There is no problem in sending it classically somewhere later.