On 20/07/2022 18:23, Nicolaas Vroom wrote:
> The concept 'now' is a universal concept. All the events, that are
> happening now, anywhere in the universe, are happening simultaneous, now.
Simultaneity, or "now", is only well defined for events that happen at
the same spatial (x, y, z) location.
For physically separated events, the times you assign to each event depend
on your speed relative to the reference frame in which those events are
described. We are not used to travelling fast enough for this to matter,
but at relativistic speeds the effect cannot be ignored.
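A minimal numerical sketch of that speed dependence (my own illustration,
using the standard Lorentz transformation t' = gamma*(t - v*x/c^2); the
event coordinates and velocities below are made-up values):

    # Sketch: how the time coordinates of two spatially separated events
    # depend on the observer's velocity, via t' = gamma*(t - v*x/c**2).
    # Event coordinates and velocities are illustrative values only.
    import math

    c = 299_792_458.0                     # speed of light, m/s

    def t_prime(t, x, v):
        gamma = 1.0 / math.sqrt(1.0 - (v/c)**2)
        return gamma * (t - v*x/c**2)

    # Two events simultaneous (t = 0) in the original frame, one light-second apart.
    events = [(0.0, 0.0), (0.0, c*1.0)]
    for v in (0.0, 0.1*c, 0.9*c):
        t0, t1 = (t_prime(t, x, v) for (t, x) in events)
        print(f"v = {v/c:.1f}c : t' = {t0:+.3f} s and {t1:+.3f} s")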
ISTR you can establish a frame of reference for clock time fairly easily
by moving clocks synchronised at your reference point of origin out into
the universe at a walking pace (where relativistic corrections can be
conveniently ignored). The errors can be made arbitrarily small by moving
the clocks more slowly to their final positions.
Not ideal experimentally but it would get the job done (eventually).
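As a rough illustration of why the slow-transport trick works: to first
order the transported clock only loses about d*v/(2*c^2) relative to the
stay-at-home clock when carried a distance d at speed v, which vanishes as
v -> 0. A little sketch, with made-up distances and speeds:

    # Sketch: synchronisation error from slow clock transport.
    # A clock carried a distance d at constant speed v records proper time
    #   tau = (d/v)*sqrt(1 - v**2/c**2) ~= d/v - d*v/(2*c**2),
    # so its offset from the coordinate time d/v is roughly d*v/(2*c**2),
    # which can be made as small as you like by reducing v.
    # The distance and speeds below are illustrative values only.
    c = 299_792_458.0                     # m/s

    def transport_offset(d, v):
        """First-order time lost by the moved clock, in seconds (v << c)."""
        return d * v / (2.0 * c**2)

    d = 3.844e8                           # roughly the Earth-Moon distance, m
    for v in (1000.0, 1.4, 0.001):        # jet aircraft, walking pace, snail
        print(f"v = {v:8.3f} m/s  ->  offset ~ {transport_offset(d, v):.3e} s")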
>
> [[Mod. note -- Related issues are discussed a lot in the classic book
>
> P. W. Bridgman
> "A Sophisticate's Primer of Relativity", 2nd edition
> Wesleyan University Press (Distributed by Harper & Row)
> 1962, 1983
> ISBN 0-8195-6078-2 (paperback)
>
> He goes into a lot of discussion on what we *mean* by words like
> "event" and "time" and "frame of reference".
> -- jt]]
This reminds me of an introductory Relativity textbook from my youth
that I read in the school library circa 1976. I cannot remember its name
but it contained a wonderful graph of the speed of light with error bars
from the initial efforts of Romer right through to present day.
It was notable because it showed how, as newer and more precise techniques
became available, the error bars narrowed and the number of significant
digits increased. But for one notable period, I think in the 1960's, the
accepted value was several sigma away from the true value because a famous
experimentalist had applied the correction for an imperfect vacuum in the
wrong sense (and everybody subsequently did the same). NBS nailed the
value down in 1972, so I think that puts bounds on
the date of publication. I have tried and failed to find this book.
It was only when a new technique took over that the mistake was
discovered. Does anybody recognise the book from this description?
Or, failing that, is anyone able to point me to such a graph of the speed
of light in vacuum c, with error bars, as a function of time since the
1600s? This is the closest I have been able to find with the obvious
search terms:
https://interestingengineering.com/a-brief-history-of-the-speed-of-light
Sadly it lacks the all-important graph...
--
Regards,
Martin Brown
[[Mod. note -- I remember a paper in the American J of Physics around
20 or 30 (?) years ago on a similar theme, looking at historical data
for the CODATA fundamental constants and how they had changed over time.
As I (dimly) recall, the conclusion was that experimenters' true
uncertainties were typically larger than the error bars they quoted, on
average by a factor of O(1.4) or so. Alas, I've been unable to find that
paper any time in
the past decade. :(
One can see similar effects in historical estimates of the Hubble constant.
Here I can actually give a reference ((locates book on bookshelf and opens
it to a bookmark)):
William H Press,
"Understanding Data Better with Bayesian and Global Statistical Methods"
chapter 3 (pp 49-60) in
John N Bahcall & Jeremiah P Ostriker, Eds,
"Unsolved Problems in Astrophysics"
Princeton U.P. 1997
ISBN-10 0-691-01607-0 (hc) or 0-691-01606-2 (pb)
Press considers the problem of how to combine multiple estimates of what
should be the same quantity, which might have systematic errors (which
he models in a Bayesian sense by multiplying the claimed error bars by
some factor > 1). He derives a Bayesian method to simultaneously estimate
the true value and the parameters of the systematic-error model. He
demonstrates the method on a dataset of 13 published Hubble-constant
measurements (ranging from 45 to 87 km/sec/Mpc). He shows a graph of
the posterior distribution for H0, with a 95% CI of 74 +/- 8, with most
of the published H0 measurements having an ~75% chance of being "correct"
(correct error bars) and ~25% chance of having much larger error bars.
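To make Press's good-measurement / bad-measurement idea concrete, here is
a minimal sketch of that kind of mixture likelihood. It is not Press's
actual code or data: it fixes the "good" probability and the inflation
factor rather than marginalizing over them as he does, and the measurements
below are made-up placeholder values, not the 13 published H0 values he used.

    # Sketch of a "good/bad error bar" mixture likelihood, Press-style.
    # Each datum x_i +/- s_i is "good" with probability p (error bar correct)
    # or "bad" with probability 1-p (error bar inflated by a factor f).
    # Posterior for the true value H is computed on a grid with a flat prior.
    import numpy as np

    x = np.array([52., 60., 68., 73., 75., 80., 86.])   # made-up values, km/s/Mpc
    s = np.array([ 5.,  8.,  4.,  6.,  3.,  7., 10.])   # made-up 1-sigma errors

    p, f = 0.75, 3.0          # prob. of "good"; error inflation factor if "bad"
    H = np.linspace(40., 100., 601)
    dH = H[1] - H[0]

    def gauss(xi, mu, sigma):
        return np.exp(-0.5*((xi - mu)/sigma)**2) / (np.sqrt(2*np.pi)*sigma)

    like = np.ones_like(H)
    for xi, si in zip(x, s):
        like *= p*gauss(xi, H, si) + (1-p)*gauss(xi, H, f*si)

    post = like / (like.sum() * dH)          # normalised posterior density
    mean = (H * post).sum() * dH
    sd = np.sqrt(((H - mean)**2 * post).sum() * dH)
    print(f"posterior mean {mean:.1f}, s.d. {sd:.1f} km/s/Mpc")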
Returning to what Martin Brown wrote about "correlated experimental
errors" (where experimenter #1 makes a mistake, and then experimenters
#2-#N all follow), some experimental groups go to great lengths to do all
their analyses "blind" to avoid just such problems. For example,
https://arstechnica.com/science/2019/09/physics-not-broken-after-all-were-close-to-resolving-proton-radius-puzzle/
includes the description
> [[the experimenters]] deliberately made a blind measurement to ensure
> against any bias, finally revealing the value they had measured over
> eight years just a few weeks prior to submitting their paper for
> publication. "The difficulty is making sure we're not influenced by
> anything that could complicate or shift energy states in our measurement,"
> said group leader Eric Hessels
> <https://www.physics.yorku.ca/faculty-profiles/hessels-eric/>. "A lot of
> the eight years [were] spent taking great care in understanding all aspects
> of the measurement so that we can carefully eliminate possibilities of
> having made mistakes."
Similarly, before announcing the first direct detection of gravitational
waves (an event detected in 2015, announced in early 2016), the LIGO
Science Collaboration did multiple "blind injections" wherein a small
subgroup (of the ~1000-member collaboration) would deliberately "inject"
a simulated binary-black-hole-coalescence signal into the LIGO data
stream, as a check on how the rest of the collaboration did at finding
it. The "blind" is that only the injection subgroup knew precisely
where in the data stream the blind injections were, or how many blind
injections there were. The rest of the LSC analyzed the data stream
blind, not knowing whether any given event might be real or might be
a blind injection. In one famous case the LSC got as far as writing
a Nature paper before "unblinding" and learning that the event was in fact
a blind injection. Harry Collins (a sociologist who studies the processes
of scientific research, particularly in gravitational-wave detection)
has written a book about this blind injection and the shifts in opinion
within the analysis groups,
Harry Collins
"Gravity's Ghost and Big Dog:
Scientific Discovery and Social Analysis in the Twenty-First Century"
U Chicago Press, 2013
paperback ISBN-13 978-0-226-05229-8, e-book 978-0-226-05232-8
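Here is a toy illustration of the blind-injection idea -- emphatically
*not* the LIGO pipeline: an "injection team" secretly adds a scaled
template to a noise stream, and the "analysts" hunt for it by matched
filtering without knowing whether anything was injected at all. The
waveform shape and all numbers below are made up.

    # Toy blind injection + matched-filter recovery (all values made up).
    import numpy as np

    rng = np.random.default_rng(0)
    n, fs = 4096, 1024                       # samples, sample rate (Hz)

    # A stand-in "chirp" template (illustrative, not a physical waveform).
    tt = np.arange(256) / fs
    template = np.sin(2*np.pi*(40 + 200*tt)*tt) * np.hanning(256)

    data = rng.normal(0.0, 1.0, n)           # white-noise "detector" stream

    # --- injection team (kept secret from the analysts) ---
    secret_index = 2500
    data[secret_index:secret_index+256] += 0.8 * template

    # --- analysis team: slide the template along the data ---
    snr = np.correlate(data, template, mode="valid")
    snr /= np.sqrt(np.sum(template**2))      # normalise to unit-template SNR
    peak = int(np.argmax(np.abs(snr)))
    print(f"loudest candidate at sample {peak}, SNR ~ {abs(snr[peak]):.1f}")
    print(f"(the secret injection was at sample {secret_index})")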
Of course, experimentalists aren't the only ones vulnerable to "following
the crowd". Theoreticians and computational researchers can be just as
vulnerable.
Around 2006-2007 a friend of mine organized a collaboration of all the
major research groups in the world working on "numerical relativity"
simulations of the gravitational waves from the decay and merger of
orbiting binary black hole systems, to inter-compare their results.
Here each group's calculation involves a large custom-written computer
code of ~100K to ~500K lines of mostly C/C++/Fortran 90, with some
important bits in Mathematica/Maple, months of supercomputer time, and
extensive data postprocessing, so one worries a lot about software bugs
as well as "groupthink". I was very impressed when they published a
joint paper [arXiv:0901.2437 = Phys Rev D79, 084025 (2009)] showing the
results from five different groups' calculations (based on different
formulations of the Einstein equations & different independently-written
computer codes), all agreeing beautifully to within their claimed error
bars!
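For what "agreeing to within their claimed error bars" means in practice,
here is a trivial sketch of the kind of cross-code consistency check one
can do -- not the analysis in that paper; the five "group results" and
error bars below are invented numbers.

    # Toy cross-code consistency check: are N independent results consistent
    # with a common value, given each group's claimed error bar?
    import numpy as np

    vals = np.array([0.9513, 0.9507, 0.9521, 0.9510, 0.9515])  # made-up results
    errs = np.array([0.0010, 0.0008, 0.0012, 0.0009, 0.0011])  # claimed 1-sigma

    w = 1.0 / errs**2
    mean = np.sum(w*vals) / np.sum(w)
    chisq = np.sum(((vals - mean)/errs)**2)
    dof = len(vals) - 1
    print(f"weighted mean {mean:.4f}, chi^2/dof = {chisq/dof:.2f}")
    # chi^2/dof ~ 1 means the scatter is consistent with the claimed errors;
    # chi^2/dof >> 1 would signal underestimated errors or a bug somewhere.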
And just last month I presented some research of mine at a conference,
and explained in my talk that a key reason why people should be interested
in <<method I used>> even though it's more expensive, less accurate, and
in some ways harder to do than <<method many other people use>> is that
we need independent computations to validate against each other before we
can really trust our results.
-- jt]]