On Monday, 31 July 2017 18:11:41 UTC+1,
danco...@gmail.com wrote:
> On Sunday, July 30, 2017 at 2:36:17 PM UTC-7, Eric Baird wrote:
> > Let's say that the star and atom have no relative velocity.
> > By the time that both signals have left [2] and entered [3],
> > the light no longer cares where it came from, so if we change
> > velocity and watch the resulting change in frequency, both
> > signals need to Doppler-shift by precisely the //same// ratio.
>
> Right, and indeed they do. This was explained in the previous message, in terms of the macro behavior of the control volumes enclosing regions of significant curvature. See below for a more detailed explanation.
Okay. So you, me and Tom all seem to agree that there must be a single motion-shift relationship for different bodies regardless of their obvious macroscopic gravitational field strength (or lack thereof). We just disagree on what that relationship ought to be.
You think all simply-moving bodies must obey SR, while Tom seems to be saying that SR is only an approximation, that the motion-shift of a moving star is not an SR problem, and that nobody who understands GR would expect its shift to agree with the SR predictions. My position is that the shift on a moving star is not an SR problem and the star's relationship must be non-SR, but ... since all moving bodies have to obey the same shift law, this means that SR doesn't just not correctly describe the star's behaviour, it also doesn't correctly describe the behaviour of any other moving bodies (except as an approximation). IOW, it's a useful simplified "engineering theory", but not Fundamental Truth, or a correct foundation theory for gravitational physics.
> > SR requires the atom to obey SR, and GR requires the
> > star to obey a non-SR relationship. So a gravitational
> > theory that reduces to and incorporates SR physics
> > requires us to see both signals Doppler-shift by
> > //different// ratios.
>
> Not true. You've fallen prey to the same misconception that misleads people into thinking (for example) that light from binary stars as seen on Earth must appear at widely different positions due to aberration, since the light was emitted from stars with different relative velocities. You've just substituted different sized objects (with different gravitational fields) for different relative velocities. The explanation of the fallacy is as follows.
>
> In your example, the light waves arriving at a point in [3] are essentially plane waves, normal to the line from the star and atom to the observer in terms of the inertial coordinates in which the hydrogen atom and star are both at rest. The waves have particular frequencies f1 and f2 in that same coordinate system. Now, if the observer in [3] is at rest in those same coordinates, the waves will have the same direction and frequencies f1 and f2, but if the observer is at rest in a different system of coordinates, moving with some speed and at some angle relative to the first system, the angle of incidence and the frequencies of the waves will be given by the simple relativistic Doppler formulas of special relativity. It doesn't matter how the plane waves originated. They will both have the same Doppler shift factor and aberration angle.
Actually, I'm agreeing with you that that argument and its conclusion seems logically correct under SR-GR ... but I'm also pointing out that in the same framework, arguments for the //opposite// result also seem similarly logically unavoidable. Hence the logical inconsistency. It's not that I don't understand the arguments, it's that I understand that there are too many of them, with conflicting outcomes! We can't solve the inconsistency by picking the result that we like, declaring it correct, and saying that the other one is therefore wrong, because someone else can come along and look at the same logical framework and make the opposite logical argument with similar validity. A theory needs to make the same predictions regardless of who's in the driving seat, otherwise the predictions belong to the operator and not the theory.
----
One of the problems with pathological logical systems is that the local logic can be absolutely faultless at every point in the system, but the global logic can still be screwy.
Imagine that you're given a Moebius strip and told that (1) it's a specific looped strip of clear flexible plastic with an unspecified number of twists in it, and that (2) the surface is orientable. This would mean that the total number of twists is an even number when viewed from any angle, and if we attach a letter "d" to the strip, and slide it all the way around and back to its starting point, it's still a "d". If the strip has an even number of twists then statement (2) is true, but if the twist number is odd, the surface is //not// orientable, and the shape returns to its start-point on the wrong side of the strip, with a mirror reflection (the "d" becomes a "b"). Given that the strip is twisty, that the twists don't have fixed locations, and that the strip may spontaneously generate or lose pairs of adjacent twists with opposite senses, how do we check that statement (2) is correct in the context of the provided physical strip (1)?
Our normal method of checking logical consistency is to use an incremental approach ... but in this case, if the strip is non-orientable, the error doesn't show up until we look at the entirety of the strip. Every single individual piece of the strip looks like a simple two-sided section of plastic, there's no fault or disjoint anywhere, and we can reason (wrongly) that if we've checked every inch of the strip and found that it's topologically two-sided, then obviously the entire strip must have two sides.
Someone pins our original location on the strip to the table with a drawing-pin, does the same to a second location on the strip, an unspecified number of twists away from our letter's start position, and asks: will your letter, moved to that position, be a "d" or a "b"?
We slide the letter along our route to the second location, and find that it's still a "d" – we can then "prove", faultlessly (assuming orientability), that the letter at that position must always appear to be a "d" regardless of route taken. But the proof is wrong if (2) is wrong.
Our friend repeats the exercise, but slides the letter around to the second location using the other route. Because the total number of twists is odd, and the number of twists along //our// path was even, the path between the two positions through the remaining section of strip must have an odd number of twists, and the letter arrives at the destination as a "b".
Our friend, assuming orientability, can "prove" that the letter will always appear //reversed// at that location, regardless of route taken. They can argue that because the total number of twists is defined as even, then our path, like theirs, must have an odd number of twists.
The logic that both we and our friend apply is utterly faultless, but both of us manage to prove different and physically incompatible results, which is a giveaway that something is wrong globally. We've been told that the strip has been made in such a way as to create an orientable surface, and it hasn't. But we can examine up to 99.999% of the strip's length ... //any// 99.999% of the length ... and find nothing wrong. The giveaway is that it's possible for two people, provided with the same strip, the same challenge and the same received information, to faultlessly prove two different outcomes.
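The twist-parity bookkeeping in the strip example can be sketched as a toy model (an illustrative sketch only – the function name `letter_after` is mine, not part of the discussion):

```python
# Toy model of the Moebius-strip parity argument.
# Each twist mirror-flips the letter, so a path's total effect depends
# only on the parity of the number of twists along it.

def letter_after(path_twists, start="d"):
    """Return 'd' or 'b' after carrying the letter through path_twists twists."""
    if path_twists % 2 == 0:
        return start
    return "b" if start == "d" else "d"

# Suppose the strip's total twist count is odd (non-orientable), say 3.
# Our route to the pinned point happens to have 0 twists; our friend's
# route through the remaining strip therefore has the other 3.
print(letter_after(0))  # our route: letter arrives unchanged ('d')
print(letter_after(3))  # friend's route: letter arrives mirrored ('b')
```

Both local computations are faultless; the conflict only appears when the two routes are compared, which is the point of the analogy.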
In the case of you and Tom, you both seem reasonably experienced at dealing with SR and GR problems, you both seem to agree that the theory is "good" and has no real faults and that you know how to apply it, and you both agree that strong-gravity and weak-gravity bodies should show the same shift relationship.
But //you// apply the theory and prove that since SR must be correct, both objects clearly HAVE to obey the SR shift relationship, while Tom starts from the gravitational angle and seems to be saying that the shifts follow a //non//-SR relationship (Tom: "SR IS IRRELEVANT!").
So either one of you isn't doing it right, or you both inhabit a theory that's similar to a Moebius strip or a Penrose triangle ... in which case, the problem is not that you can't both //prove// that your conflicting positions MUST be correct ... the problem is that you both //can//.
In the exercise with the strip, the "reasonable but wrong" statement was that if we take two sections of flat paper and join them together, with flat joins, the resulting loop must be double-sided.
In the case of "SR+GR", the "reasonable-but-wrong" statement was that if we take a gravitational curved-spacetime theory of physics, it must perfectly mesh with and reduce to a flat-spacetime theory of physics.
Both statements seem obviously correct, until you're presented with a counter-example.
> > If we revert the SR shift equations to the older
> > Newtonian optics set...
>
> But the Newtonian optics has been demonstrated to be wrong, which is why it was discarded.
No, the "Newtonian optics" Doppler relationships hadn't been demonstrated to be wrong in 1905, and (as mentioned) in many cases give indistinguishable or even identical physical results to the SR predictions. So there are limits to how wrong the NO relationships //could// be, experimentally, without the SR set also failing to agree with experiment.
The main "implementation" of Newtonian theory applied to light – Ballistic Emission Theory ("BET") – //was// soundly disproved thanks to its lousy signal flight-time predictions, but it wasn't the only possible implementation. We can also implement NO using an acoustic metric, in which case we get wave-theory compatibility: light is still emitted at cEmitter, but it's now received at the receiver at cReceiver, with the SoL at all points in between determined by local environmental factors.
Trying to disprove the NO Doppler equations themselves without relying on flight-time arguments is surprisingly difficult. Einstein apparently still hadn't managed to come up with a proper disproof by 1913, and was pleased that the de Sitter experiment gave a disproof of BET, because it meant that he thought he could finally put the competing equations away and stop worrying about them.
Part of the SR experimental community also seemed to realise that in many comparisons of the SR and NO Doppler predictions, it would often be difficult or impossible to obtain a convincing result showing which shift predictions were best. The response seems to have been to choose _not_ to make that comparison – to instead pick a less adequate reference test theory that said there was no theory in the range SR<=NO that required testing, that the legal range to test was therefore just CT-SR, and that any data falling in the "impossible" SR-NO range could be safely corrected for or calibrated away as experimental error and treated as supporting SR.
Excluding the range that included NO meant that more experiments could now be claimed to validate SR to very high accuracy and significance, even where there was no actual difference between the SR prediction and its C19th Newtonian predecessor. We used the choice of test theory to eliminate Newtonian predictions from the analysis, with Darwinian natural selection favouring the SR test theory that would allow experimenters to present their results as having maximum significance, over other test theories that might have been more credible.
It's a little bit like what happened at Enron. The Enron board probably never explicitly conspired with their lower management to get their internal financial data misreported ... They simply created a structure that rewarded people financially and socially for over-reporting, and made it clear that everyone was doing it, that this made the board happy, that it counted as loyalty to the company, and that there was no obvious downside. Within Enron, you got social disapproval of your peers and career blight for //not// inflating figures. With a lot of SR testing, all one normally had to do to enhance the claimed significance of one's results was to cite the same test theory papers that one's colleagues were already using.
> > ...and also revert SR's redefinitions of distances and times...
>
> It isn't a matter of definitions of distances and times, because we can express any theory in terms of any system of space and time coordinates we like.
Yep, but to do comparative analysis between two theories, we can't always plug an SR-derived velocity value into a Newtonian-style calculation or vice versa. That would involve disproving one theory by //presupposing// that the other was correct. That argument could be used in either direction.
> The measures of distance and time conventionally used in special relativity are the unique measures that constitute inertial coordinate systems, i.e. systems in which the equations of physics hold good in their simple homogeneous and isotropic form. The insight of special relativity is that these unique coordinate systems are related by Lorentz transformations, not Galilean transformations. The Lorentz invariance of all physical phenomena has been experimentally established to great precision.
But Newtonian optics doesn't use the "Galilean" Doppler relationships. Those seem to be "bluer" than SR by a Lorentz factor, while the "NO" Doppler shifts are redder than SR by a Lorentz factor. The NO set are actually redder than the Galilean set by a Lorentz-//squared// factor.
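If it helps, the claimed Lorentz-factor spacings between the three recession-Doppler predictions can be checked numerically (a quick sketch with c = 1; the function names are mine):

```python
# Compare the three recession-Doppler predictions discussed here, with c = 1
# and v the recession speed as a fraction of c.
import math

def doppler_no(v):        # "Newtonian optics": f'/f = (c - v)/c
    return 1 - v

def doppler_galilean(v):  # stationary-aether "Galilean": f'/f = c/(c + v)
    return 1 / (1 + v)

def doppler_sr(v):        # special relativity: f'/f = sqrt((c - v)/(c + v))
    return math.sqrt((1 - v) / (1 + v))

v = 0.5
gamma = 1 / math.sqrt(1 - v**2)

# NO is redder than SR by one Lorentz factor, the Galilean shift is bluer
# than SR by one Lorentz factor, so NO is redder than Galilean by gamma^2.
print(doppler_sr(v) / doppler_no(v))        # ratio equals gamma
print(doppler_galilean(v) / doppler_sr(v))  # ratio equals gamma
print(doppler_galilean(v) / doppler_no(v))  # ratio equals gamma squared
```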
> > Presumably you've now gone away and done a few
> > sample calculations of the SR and NM shift predictions
> > using theory-specific values of v, and found that I'm
> > right.
>
> I've certainly made that comparison, as have countless others, but found that you are not right. The relativistic Doppler effect is measurably different than the non-relativistic (Galilean) Doppler effect, and measurements have confirmed the relativistic effect.
Then you're using the wrong Doppler equations <g>
Depending on the setup, sometimes there are measurable differences between NO and SR, sometimes there aren't. For instance, if you take Einstein's 1905 calculation of E=mc^2, and you swap out the SR definitions and relationships for the NO set, you get precisely the same final answer.
I'm emphatically not arguing for the non-relativistic Galilean Doppler effect. Those predictions are awful, and they're not what Newtonian optics predicts.
I'm a relativist. I do relativity theory. The relativistic effects normally present in SR are if anything //nominally stronger// under the alternative system, not weaker. SR predicts a Lorentz transverse redshift? The alternative predicts a Lorentz-squared redshift. It's textbook relativity's bigger, stronger, brighter brother.
----
What Einstein described as "classical theory", which SR was supposed to be compared against, assumed the validity of Newtonian theory for calculations involving matter (because, SR postulate #1, relativity), but the equations of a stationary aether (wrt the observer) for light (because, SR postulate #2, "we know that the SoL is globally constant for the observer"). Those two sets of predictions were always logically incompatible, because the Newtonian energy and momentum relationships, applied to light, give a predicted recession Doppler shift of
[1] f'/f = (c-v)/c
, while the corresponding Doppler prediction for a speed of light globally fixed wrt the observer is
[2] f'/f = c/(c+v)
These are two physically different predictions: if a body recedes at half background lightspeed, the first equation (NO) predicts a halving in viewed frequency, while the second equation predicts a frequency that is two-thirds the rest frequency. The first predicts f'=0 for lightspeed recession, the second predicts a viewed frequency only halving for lightspeed recession. [1] is associated with a Lorentz-squared transverse redshift ("aberration redshift"), [2] is associated with no transverse redshift at all.
What SR does is to take these two conflicting halves of "classical theory" and resolve the conflict by taking their geometric mean, so that we end up with
[3] f'/f = SQRT[ [1]×[2] ] = SQRT[ (c-v)/(c+v) ]
, which deviates from both earlier predictions by the same ratio, the Lorentz factor. By consistently multiplying in or dividing out this Lorentz factor we've reconciled the apparent conflicts produced by the two SR postulates, and we have the equations of special relativity.
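A quick numerical sanity check of equations [1]–[3] and the half-lightspeed example (c = 1; sketch only):

```python
# Verify that [3] is the geometric mean of [1] and [2], and reproduce
# the worked half-lightspeed numbers from the text (c = 1).
import math

def f1(v):  # [1] Newtonian optics: f'/f = (c - v)/c
    return 1 - v

def f2(v):  # [2] speed of light fixed wrt the observer: f'/f = c/(c + v)
    return 1 / (1 + v)

def f3(v):  # [3] special relativity: f'/f = sqrt((c - v)/(c + v))
    return math.sqrt((1 - v) / (1 + v))

v = 0.5
print(f1(v))   # 0.5 -- viewed frequency halves
print(f2(v))   # two-thirds of the rest frequency
print(math.isclose(f3(v), math.sqrt(f1(v) * f2(v))))  # True: geometric mean

# Lightspeed recession: [1] predicts zero frequency, [2] only a halving.
print(f1(1.0), f2(1.0))
```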
> > In loads of cases where SR is touted as making
> > unique predictions that wouldn't be expected if
> > the theory wasn't right, it turns out that the
> > "new" predictions are indistinguishable from or
> > identical to what we get with C19th theory.
>
> Not true. Whenever claims like this are raised, providing specific examples based on Galilean relativity, they are always easily debunked.
And as mentioned, I'm emphatically not trying to do anything "Galilean"!
You're the one introducing the word. Googling "Galilean Doppler" throws up a bunch of pages that seem to use equation [2]. The suggested alternative system instead uses equation [1].
You're claiming superiority for mainstream theory by misrepresenting the predictions of the alternative. That's no way to win a scientific argument.
>Of course, the term "C19th theory" is ambiguous, and one could argue that Lorentz invariance was already implicit in Maxwell's equations, and hence special relativity (at least for optics) should be classified as 19th century theory. But the point is that the relativistic Doppler relations are correct.
But you don't //know// that. Because the SR and NO Doppler equations are often very, very difficult to tell apart.
Remember, for all its faults, the Doppler relationships associated with nasty old ballistic emission theory were //also// technically "relativistic". Ballistic emission theory didn't assume a preferred frame for light. And it successfully predicts transverse redshifts and the particle accelerator lightspeed limit, results that members of the SR community often claim to be SR-specific.
> > The closeness of the SR and NO predictions was
> > already a "thing" in the 1900s.
>
> Of course. Special relativity gives many results that differ from pre-relativistic predictions by only small amounts (e.g. second order) in many circumstances. That goes without saying, since it took a long time for special relativity to be discovered, because the pre-relativistic predictions based on Galilean relativity worked "pretty well" in many circumstances. But they are incorrect to higher orders, and this makes a huge difference for the understanding of many of the most fundamental physical phenomena.
Again, forget about anything Galilean, those relationships suck.
And that's presumably one reason why they're the standard reference that SR is compared against. Because compared to //them//, almost anything looks good! :D
It's a piece of sleight-of-hand. You're being encouraged to think that the only choice here is between relativity theory (represented by SR), and theories that don't obey the PoR, and that's not the actual choice.
If you're suddenly using terms like "Galilean" (which I don't think anyone else has used in this discussion), then it sounds to me as if you may have recently been reading up on SR testing. I think I ought to warn you that a lot of that material is junk. It's designed to show off SR in the best possible light, it compares SR to the worst possible alternatives, and a load of the purported analysis is simply mathematically and historically wrong (especially when you go back to the stuff written in the ~1920s).
Some of this is Einstein's fault: he carefully crafted narratives to explain the case for SR, where he presented a bad initial system or a bad initial argument with problems, and then presented SR as the solution to those problems ... but the "previous" system often seems to have been artificially constructed by him as an expositional tool, or conveniently skips past a generation of better historical theory. These explanations shouldn't be taken as representing what most people really believed before special relativity came along, or the actual range of possibilities.
> > If you simultaneously measure the forward blueshift
> > and rearward redshift on a particle moving in a straight
> > line...
>
> You're just describing Ives-Stilwell, which gave results consistent with special relativity. This is one of the (many) experimental refutations of your beliefs.
No, it's about the only one. And it was done by a couple of guys who didn't believe in SR, and who considered their experiment to be validating Lorentz aether theory instead. And AFAIK there's never been a successful replication.
Now, flip to a hypothetical alternative reality. If //current// theory had just one clear conflicting result, and that result had never been replicated, and had been carried out by a couple of aether theory guys, the SR community could say, "Pah, the guys were obviously non-SR fringe researchers, and the lack of replication just shows that it was a rubbish experiment!"
We also have the Hasselkamp transverse shift experiment (also apparently unreplicated!), which found double the transverse redshift predicted by special relativity. The key thing to me about the Hasselkamp test is that the hardware reported twice as much redshift as expected, the standard SR test theory said "but that's not a legitimate result", and the experimenters basically had to invent a half-degree detector misalignment to explain away half the effect, and then use statistics on the remaining half to argue that the remaining result was compatible with SR to a few percent. If they'd had more time, they'd have been entitled by their test theory to redo the test with half the redshift manually calibrated out, to bring the results into the "proper" range. The fact that they were short of time meant that they had to put the "correction" into the analysis phase where it was visible, rather than the experimental phase where it could have remained hidden.
So what the experiment established was that C20th SR test theory let experimenters override hardware results and manually change a ~100% overshoot into a ~6% agreement, and still pass peer review. And that makes any test that used that same test theory unreliable when it comes to comparing SR with NO. Any tests carried out under those rules would be allowed to convert pro-NO hardware readings into a pro-SR result after adjustment.
In fact, since any experimental measurement is going to be liable to a certain amount of noise and error and statistical scatter, we can argue that perhaps we'd get the best pro-SR results in a test assuming the range CT-SR if the real shift relationships //weren't// those of SR, but were somewhere in the forbidden range, SR-->NO.
> > > You have not defined a "Cliffordian universe"...
> >
> > In the sort of universe imagined by Clifford, physical
> > matter is associated with distortion fields, and the
> > interactions of matter with matter are expressible as
> > the interactions of the associated fields.
>
> Sure, that's the dream of a unified field theory, but your enemy is not general relativity, it is quantum field theory (and the lack of an equivalence principle for non-gravitational forces). Actually, general relativity accomplished Clifford's dream for the force of gravity, representing it as curvature related to each particle of mass,
My understanding was that nobody had ever managed to derive the SR equations of motion as an exact solution for particles with gravitational fields.
I seem to remember someone deriving equation [1], the "Newtonian optics" equation that I'm suggesting is the correct solution for moving gravitational masses, applicable everywhere ... and then //presenting// it as a derivation of the Newtonian approximation of SR's equation [3].
> but no one has been able to duplicate that success for the other forces of nature. But more fundamentally, please note that Clifford's conception of a continuous manifold reduces to flat tangent spaces at each point.
Yeah. Flat tangent spaces containing zero particles capable of acting as observers or observed masses, and also sufficiently far from any point-masses for their fields not to measurably intrude. So in a Cliffordian physics, a flat tangent space isn't just the limit at which the number of available particles to do physics with drops to //zero//, it's beyond that ... it's the limit at which there are not even any particles available to do physics with //nearby//.
In a Cliffordian universe where all-physics-is-curvature, the geometry of a flat tangent space is geometry-but-not-physics. Pretty much by definition.
>He did not contemplate a non-Riemannian or discontinuous surface, so he cannot be cited in support of your belief in a discontinuous manifold.
I have no such belief.
I most emphatically DO NOT believe in, suggest, or promote the idea of a discontinuous manifold. If someone's told you otherwise, you should stop listening to them, and stop repeating what they tell you, in public, as fact.
Choose your sources more wisely.
If you're confidently making declarations of what I supposedly believe and getting //those// wrong, then I have to assume that your similarly confident statements on other things that are more difficult for me to check may be similarly wide of the mark.
> > In a Cliffordian universe ... one in which all physics
> > can be described in terms of curvature ... a curved-spacetime
> > geometry doesn't reduce to flat-spacetime physics, it reduces
> > to flat-spacetime //non//-physics.
>
> When you talk about describing things in terms of curvature, you need to give at least some hint as to what you mean. For example, Clifford imagined, in a vague sort of way, that particles might consist of regions of high curvature, and those regions might propagate along extremal paths (i.e., geodesics) in the ambient curved space due to other "particles", so the curvature of the space would affect the motions of "particles", and the entire universe is a single continuous space with curved regions propagating here and there. In general relativity the gravitational interaction is modeled in just this way, except that the manifold is spacetime instead of space.
Pretty much. The suggested alternative system "steals" most of existing GR but rejects the SR component as incompatible with Clifford's idea (because it models particle interactions in the //absence// of curvature), and then instead of SR's global c-constancy, it uses GR-style principles to create local lightspeed constancy at the level of inertial physics, as a curvature-regulated effect.
> But you can't be referring to anything like this, because both of these entail flat tangent space (or spacetime) at every point (event). So, since you aren't talking about Clifford's idea, and you aren't talking about Einstein's idea, you need to say what YOU (Eric) are talking about.
I'm talking about Clifford's idea that all physics is curvature (but as you say, in space and time rather than just space), and Einstein's ideas about general relativity (apart from the part about reduction to SR, because SR describes physics //without// curvature, which doesn't correspond to Clifford's concept).
I'll try to explain it again:
... In a "Cliffordian" physics, every particle of matter that exists in a region and is capable of participating in meaningful physics is associated with a "dent" in the shape of that region's light-geometry. If we zoom in so far on a part of this surface that our field of view appears "effectively flat", then the region that we are looking at will by definition contain no dents, and will therefore contain no particles.
There will //by geometrical definition// be no physical matter inside that region to observe, and //by geometrical definition// no physical matter capable of acting as an observer. So there's no meaningful observer-physics taking place there. A flat zoomed-in region represents not the geometrical properties of physics, but the geometrical signature of an absence of physics. There's nobody home.
In theory we could still populate the flat region with hypothetical "purely mathematical" objects and observers that don't have associated bumps like the "real" particles, and derive the relationships of these abstract mathematical beings as special relativity. But the geometrical laws of physics that we derive for these beings will then be different to the geometrical laws that we derive for the //real// observers that //do// have relatively moving bumps.
So the SR relationships derived this way (in a Cliffordian universe) don't apply to matter, and do not correspond to physical law. There's still an obvious geometrical reduction to //flat spacetime//, but there's not an associated reduction to flat-spacetime //physics//, so the Cliffordian description is a logical counter-example to the idea that gravitational theory has to reduce to the physics of special relativity.
> > In 1960 we found that with rotating reference systems,
> > the inertial and gravitational descriptions generated
> > irreconcilably different geometries, if the "inertial
> > physics" description came from special relativity.
>
> That is simply not true. The issue of rotating frames was one of the main areas of focus when Einstein was developing general relativity in 1911-1915, and the criticisms of the equivalence principle related to rotating coordinate systems were fully resolved.
Well, according to the American Journal of Physics, some new criticisms appeared in early 1960.
> The incidental little paper by Schild that you have fixated on does not contain any significant or novel insights, and has no effect on the meaning or interpretation of general relativity.
Except that the paper stated that the interpretation of general relativity //was// specifically being changed from that point forwards. Am.J.Phys agreed. And that's what seems to have then happened.
> Schild would never have dreamed of denying the obvious fact that the tangent space at any point on a smooth manifold is flat.
Not all mathematics is physics.
> > So either SR was wrong or the GPoR wasn't fully general.
> > Either way, the 1916 theory was invalidated.
>
> Not true at all. Einstein's 1915 theory of relativity is still the current best theory of gravity,
It's probably the simplest of a bad batch of theories, all of which suffer from the same inherent design flaw.
If our current peer-review standards require that all classical theories of gravity reduce exactly to SR as a limit, then every textbook theory that meets that condition will have the same incompatibility with QM, and the same basic internal logical defects.
It's like, Einstein's general theory is a three-legged horse, and it's the fastest horse in a race whose entrance requirements are that four-legged horses aren't allowed to participate. Being the best in that field is really nothing to boast about.
> and has passed all experimental tests.
<cough>darkmatter</cough>
>It was neither changed nor re-interpreted in 1960 or any other time.
Says you. The historical record suggests otherwise. Were you a participant in the 1960 discussions? If not, where are you getting your information?
> Again, Schild's banal and unoriginal little comments did not invalidate general relativity, and he never claimed that they did.
No, Schild said that after some non-unanimous discussion, with GR as it had previously been understood, _the invalidation was already generally accepted as having happened_.
However, he then argued that since the theory couldn't or shouldn't self-invalidate, the arguments that led up to the invalidation must be considered invalid within the context of the theory, and that from that point on, we had to adopt a different attitude to the validity of the principle of equivalence in rotating-body problems.
Schild didn't take credit for the arguments himself, or say who else had been involved in the discussions. I'd rather like to know who else was involved, and whether Schild was acting as a mouthpiece for someone else.
IMO it's a very strange paper, but Am.J.Phys. (whose aim is partly to document the "correct" way to apply and teach physics theory) decided that it should be published.
A post-1960 viewpoint that I sometimes come across is that GR's mathematical machinery is correct, but that the original principles that generated that machinery were "naive", and that we are now in a better position to say with hindsight how the theory //should// have been defined. I was in conversation with a math guy some years ago and whingeing that current GR no longer seemed to attempt to conform exactly to the general principle of relativity, and he basically shrugged and said, "Yes, but who cares?" – according to him, the concept of the GPoR was naive and only a rough guide, the mathematical core of modern GR was not the GPoR but the principle of covariance, covariance was much more important, and by rights the theory would be better off called "the theory of covariance", but we were stuck with the name GR for historical reasons.
> > People just make stuff up to support
> > SR about "what would happen if SR wasn't correct",
> > without bothering to check whether its true.
>
> Not true at all.
You've just had an example. Cliff Will stated that atmospheric muons wouldn't reach ground level unless special relativity was right, when in fact the Newtonian calculation has the muons reaching exactly the same depth as they do under SR.
> Beginning in the early 1900's there were tests with accelerating particles, and the predictions of various competing theories were compared closely with the data. All the theories gave very similar predictions, so the experiments had to be made more and more precise in order to distinguish between them. Eventually they were able to establish that the predictions of the Lorentz-Einstein theory were correct, i.e., they found that the phenomena were Lorentz invariant.
Well, Will's confident assertion that we knew that SR was correct because of known muon behaviour was badly wrong ... how do you know that the other pro-SR analysis is any more reliable?
> > Has there ever been any research on how the "Lorentz
> > invariance" issue changes if we allow the existence of
> > a solution that is nominally redder than SR by an
> > additional Lorentz factor?
>
> Yes, there has been.
So you can give me a reference to the paper or book section that examines this question?
>Abundant experimental evidence (such as discussed above) confirms that all phenomena are Lorentz invariant, ruling out any additional Lorentz factor.
Well, the test theory under which the data was collected is going to be kinda important.
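For concreteness, here is what "redder than SR by an additional Lorentz factor" looks like numerically. This is a minimal sketch of my own (the function names are mine, and I'm assuming the shift is quoted as the observed/emitted frequency ratio for recession speed β = v/c); the underlying identity, sqrt((1−β)/(1+β)) × sqrt(1−β²) = (1−β), is straightforward algebra.

```python
import math

def shift_sr(beta):
    # Special-relativistic Doppler ratio f_obs / f_emit
    # for an emitter receding at beta = v/c.
    return math.sqrt((1 - beta) / (1 + beta))

def inverse_lorentz(beta):
    # sqrt(1 - beta^2), i.e. 1/gamma -- one "additional Lorentz factor".
    return math.sqrt(1 - beta**2)

def shift_redder(beta):
    # The SR prediction multiplied by one additional (inverse) Lorentz
    # factor.  Algebraically this collapses to the simple ratio (1 - beta).
    return shift_sr(beta) * inverse_lorentz(beta)

beta = 0.6
print(shift_sr(beta))       # ~0.5
print(shift_redder(beta))   # ~0.4, i.e. (1 - beta)
```

The point of the sketch is just that the "extra Lorentz factor" candidate is a perfectly definite, well-behaved relationship (it reduces to f'/f = 1 − β), so the question of whether existing test-theory analyses actually discriminate against it is a meaningful one.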
> > What are the relativistic rules for inertial physics
> > if lightspeed is assumed, more realistically, to only
> > be locally constant? The result is a different theory.
>
> Right, the result is general relativity, in which special relativity is valid on the tangent space of the spacetime manifold at any point.
But the geometry of a tangent space does not necessarily describe the real-world laws of physics for particulate matter.
It might be more like the Phantom Zone in the Superman comics, a place that exists outside normal time and space, and has its own different laws (and where bad people are sent as a punishment!).
The tangent space argument makes lightspeed nominally //globally// constant across the artificial tangent space.
If (as I said) lightspeed is assumed to //only// be locally constant, we can suggest that a lightsignal has a velocity of cEmitter at the emitter and a velocity of cObserver at the observer (local c-constancy), and that the velocities in between are a function of local geometry. The change in velocity between moving particles can then be described as being due to a gravitomagnetic dragging effect, and at that point we get a relativistic acoustic metric rather than a Minkowski metric, and a different set of equations of motion.
In that context, the "tangent" arguments and "zooming" arguments just aren't that relevant to physics. I know that mathematicians enjoy using those methods, but this doesn't make them physically meaningful.
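For reference, here is the standard Unruh-type acoustic line element from the analogue-gravity literature, set against the Minkowski line element it replaces. This is just the textbook form, not a derivation: ρ is the effective medium density, c the local signal speed, and v^i the local flow ("dragging") velocity field.

```latex
% Minkowski line element (c globally constant):
ds^2 = -c^2\,dt^2 + \delta_{ij}\,dx^i\,dx^j

% Acoustic (Unruh-type) line element: c is only locally constant,
% and the flow field v^i(x,t) plays the role of the dragging velocity:
ds^2 = \frac{\rho}{c}\Big[-c^2\,dt^2
       + \delta_{ij}\,\big(dx^i - v^i\,dt\big)\big(dx^j - v^j\,dt\big)\Big]
```

Note that where v^i = 0 and ρ/c is constant, the acoustic form reduces to Minkowski, which is why the two descriptions agree locally but can disagree globally.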
> > Classical Hawking radiation is a consequence of the
> > alternative equation-set, but is impossible with the
> > SR equations applied to gravity. So with the other set
> > we get signal leakage, while with SR-based GR, we get
> > a silent "Wheeler" black hole and conflicts with QM.
>
> This is really a separate subject, but the very prediction of Hawking radiation arises from the combination of general relativity and quantum field theory, so it's ridiculous to claim that Hawking radiation, per se, "is impossible" under either general relativity or quantum field theory.
It's impossible under current general relativity, without externally-applied fudges (like an artificial, retrofitted, quantum field theory overlay). Hawking radiation requires a causal structure and definitions that disagree with those of current GR. The causal definitions of quantum theory (before you get to the quantum bit!) and those of SR-based GR are philosophically different. As Einstein explained to Heisenberg, according to SR's logic (which SR-based GR inherits) "what's measured defines what's real", whereas with QM's logic, "what's real defines what's measured". Heisenberg had originally expected to try to give QM the same definitions as SR, but Einstein corrected him and said that his own SR approach shouldn't be applied to QM, because the approach was "nonsense".
Heisenberg credited the conversation with having inspired him to come up with the uncertainty principle.
> Analog models with acoustic metrics mimic some, but not all, of the features of Hawking radiation and gravitation,
Can you suggest one feature of Hawking radiation that doesn't have a counterpart under an acoustic metric? I thought the feature-set seemed to be pretty complete. You even get the occasional additional prediction that the QM guys appear not to have made yet. :)
Also consider that cosmological Hawking radiation – a description of how information leaks through a cosmological horizon and how the horizon has a non-zero temperature – is typically described entirely within the classical domain, using a non-SR Doppler shift relationship for cosmological recession and cosmological redshift.
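For what it's worth, the non-zero temperature conventionally assigned to a cosmological (de Sitter) horizon is the Gibbons–Hawking temperature, which depends only on the expansion rate H:

```latex
T_{\mathrm{dS}} = \frac{\hbar H}{2\pi k_{\mathrm{B}}}
```

The geometric input here (the horizon's location and the expansion rate) is entirely classical; the quantum constants only enter in converting the horizon's surface gravity into a temperature.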
An observerspace description of the appearance of a cosmological horizon, using classical physics, would seem to have to be a feature-complete manifestation of the statistical mechanics of quantum field theory. Because that horizon behaviour is presumably physically real and actually out there, it would seem to have to obey statistical laws, and we know that a cosmological horizon is an acoustic horizon, whose physics obeys the rules of acoustic metrics.
So with the cosmological horizon as a physical system that appears to have to obey both sets of laws, it's actually quite difficult for an acoustic metric NOT to have a 100% (or nearly 100%) correspondence to quantum field theory.
...
> but do not represent a viable realistic model of the phenomenon, unless modified and refined to the point that it becomes general relativity combined with quantum field theory.
If a cosmological horizon //in real life// has to obey both acoustic metric rules of physics and quantum field theory statistics, then the acoustic metric behaviour has to generate the QFT statistics in a "viable and realistic" manner. Because we're no longer talking about mathematical abstraction, we're talking about something that is thought to be real physical behaviour.
If we take general relativity and delete all the SR-based content, then the mechanism that we get for local lightspeed constancy, where light passed between relatively moving particles only shows local c-constancy, is essentially a GR-style description of gravitomagnetic lightspeed regulation. The resulting physics then obeys acoustic metric rules rather than Minkowski spacetime rules. The nominal shift relationships become redder than SR because of the additional gravitomagnetic curvature, the system becomes more nonlinear, and the observerspace logic corresponds to the definition that Einstein gave Heisenberg for QM instead of the version he used for SR. We get indirect radiation through a gravitational horizon, and the horizons become "effective" horizons rather than "event" horizons. Particles get "bumped" out through the horizon along non-inertial paths, and a naive back-extrapolation of the escaping particles' trajectories that ignores the acceleration then gives an artificial description in which the particles were created outside the horizon as particle-pair production events. At that point we have the "popular" 1970s QM-based description of Hawking radiation, from purely classical principles.
This modified version of general relativity does not then need to be "combined" with quantum field theory, because it's already producing the same basic physical behaviours. The two descriptions become dual descriptions of the same physics.
The only really difficult thing here is the psychological wrench of "letting go" of the safety-net of special relativity's comfortingly flat geometry, and fully embracing the vertiginous concept of a physics entirely based on curved-spacetime principles. We're used to breaking big problems down into smaller stages, and then trying to fit those fairly self-contained parts together to build a bigger picture. With this system you can't use that incremental approach so much, you have to find the common principles that work across multiple fields of physics, and allow an entire structure to emerge naturally from those principles, more or less as a single piece.
Once you have the structure right, the rest kinda follows.
Eric Baird
https://www.researchgate.net/publication/316981511_When_a_black_hole_moves_The_incompatibility_between_gravitational_theory_and_special_relativity