
Jun 21, 2021, 4:01:07 AM

20-JUN-2021

hello -

I find a maximum value for the Lorentz gamma factor,

gamma = (1 - v^2/c^2)^(-1/2) = 54794158.005943767726,

for a relative velocity v = 299792457.99999997 m/s.

For an electron with mass m_e = 510998.91 eV/c^2 and momentum p_e = m_e*v,

the max velocity is v_e = p_e/m_e = 299792457.9999999404 m/s.

Plugging v_e into the gamma equation yields the same gamma max.

Increasing the velocity beyond the eighth decimal place does

not change the gamma value either, until it blows up as gamma = inf.

Is there a good turn of phrase to explain this limit?

Cheers,

mj horn

[[Mod. note -- I think "floating-point rounding errors" is the phrase

you're looking for. If v/c is very close to 1, then the formula for

gamma tends to be very sensitive to rounding errors, causing the sorts

of anomalous behavior you noticed.

The computation can be reorganized to be less sensitive to rounding

errors, but the easy solution is to just use brute force, i.e., use

higher precision in the computation. For example, software systems

such as Sage, Maple, and Mathematica can all easily do computations

in higher precision than standard C "double" (which typically gives

about 16-digit accuracy). For example, in Sage:

sage: gamma(v_over_c) = 1/sqrt(1 - v_over_c^2)

sage: gamma(1 - 1/(10**20))

100000000000000000000/199999999999999999999*sqrt(199999999999999999999)

sage: n(gamma(1 - 1/(10**20)), digits=50)

7.0710678118654752440261212905781540809584467771981e9

sage:

As to what relevance this has for *physics*: the current record for the

highest-energy cosmic ray has a gamma factor of over 10**20, corresponding

to v/c of over 1 - 10**-40.

-- jt]]
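The reorganization the moderator mentions can be sketched in ordinary double precision, with no extended-precision software at all (my illustration, not part of the thread): take the small deficit d = 1 - v/c as the input instead of v/c itself, so the cancellation-prone subtraction 1 - (v/c)^2 never occurs.

```python
import math

def gamma_naive(v_over_c):
    # Cancellation-prone: 1 - v_over_c**2 loses all significant digits
    # as v_over_c approaches 1, and is exactly 0 once v_over_c rounds
    # to 1.0 in double precision.
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

def gamma_from_deficit(d):
    # d = 1 - v/c, supplied directly as a small exact number.
    # Algebraically 1 - (1 - d)**2 = d*(2 - d); this form subtracts
    # no nearly equal quantities, so no digits are lost.
    return 1.0 / math.sqrt(d * (2.0 - d))

# For v/c = 1 - 1e-20 the naive form fails outright: 1.0 - 1e-20
# rounds to exactly 1.0, so the square-root argument is zero.
try:
    gamma_naive(1.0 - 1e-20)
except ZeroDivisionError:
    print("naive form: division by zero")

# The deficit form agrees with the 50-digit Sage value above
# (7.07106781186547524402...e9) to about 15 digits:
print(gamma_from_deficit(1e-20))
```

The function names here are hypothetical helpers chosen for the sketch; the algebraic identity 1 - (1-d)^2 = d(2-d) is the whole trick.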


Jun 21, 2021, 5:48:17 AM

mark horn <toadast...@gmail.com> wrote:

> [[Mod. note -- I think "floating-point rounding errors" is the phrase

> you're looking for. If v/c is very close to 1, then the formula for

> gamma tends to be very sensitive to rounding errors, causing the sorts

> of anomalous behavior you noticed.

I would suggest that everybody who does floating-point calculations

should read the famous "Goldberg paper", "What Every Computer

Scientist Should Know About Floating-Point Arithmetic" available

from https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

(it is not absolutely necessary to follow the proofs as a user).
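A compact Python illustration of the effect Goldberg describes (my example, not from the paper): an IEEE-754 double carries about 16 significant digits, so any v/c closer to 1 than roughly half the machine epsilon rounds to exactly 1.0, which is why the gamma in the original post first froze and then jumped to infinity.

```python
import sys

# Machine epsilon for IEEE-754 double precision: about 2.22e-16.
eps = sys.float_info.epsilon
print(eps)

# Increments well below eps are absorbed entirely when added to 1.0...
assert 1.0 + eps / 4 == 1.0

# ...and the same granularity applies just below 1: a v/c of
# 1 - 1e-20 is indistinguishable from exactly 1.0.
assert 1.0 - 1e-20 == 1.0
```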


Jun 21, 2021, 10:32:05 AM

Thank you. Excellent reference to have, as I plod forward with the machine on my back.

Best, m

Jun 21, 2021, 10:32:05 AM

On Monday, June 21, 2021 at 4:01:07 AM UTC-4, mark horn wrote:

> 20-JUN-2021

> hello -

>

> I find a maximum value for the Lorentz gamma factor,

> gamma = ((1-((v)^2/c^2))^(1/2))^-1 = 54794158.005943767726,

> for a relative velocity v = 299792457.99999997 m/s.

> For an electron with mass m_e = 510998.91 ev/c^2 and momentum p_e=m_ev

> the max velocity is v_e = p_e/m_e = 299792457.9999999404 m/s.

> Plugging v_e into the gamma equation yields the same gamma max.

> Computing a higher velocity past the eighth decimal place does

> not change the gamma value either; until it blows up as gamma = inf.

>

> Is there a good turn of phrase to explain this limit?

>

> Cheers,

> mj horn

>


> [[Mod. note -- I think "floating-point rounding errors" is the phrase

> you're looking for. If v/c is very close to 1, then the formula for

> gamma tends to be very sensitive to rounding errors, causing the sorts

> of anomalous behavior you noticed.


>

> The computation can be reorganized to be less sensitive to rounding

> errors, but the easy solution is to just use brute force, i.e., use

> higher precision in the computation. For example, software systems

> such as Sage, Maple, and Mathematica can all easily do computations

> in higher precision than standard C "double" (which typically gives

> about 16-digit accuracy). For example, in Sage:

>

> sage: gamma(v_over_c) = 1/sqrt(1 - v_over_c^2)

> sage: gamma(1 - 1/(10**20))

> 100000000000000000000/199999999999999999999*sqrt(199999999999999999999)

> sage: n(gamma(1 - 1/(10**20)), digits=50)

> 7.0710678118654752440261212905781540809584467771981e9

> sage:

>

> As to what relevance this has for *physics*: the current record for the

> highest-energy cosmic ray has a gamma factor of over 10**20, corresponding

> to v/c of over 1 - 10**-40.

> -- jt]]

21-JUN-2021

Thanks so much.

I'll chalk up another one to the unreasonable effectiveness

of my ignorance at leading me astray.

thanks again, m

Jun 21, 2021, 5:10:51 PM

On 6/21/21 3:01 AM, mark horn wrote:

> [...]

For extended-precision arithmetic, rather than "Sage, Maple, and

Mathematica", people may find it easier to use Python and its decimal

module. It provides arbitrary-precision decimal arithmetic using

standard Python arithmetic operators and functions. It defaults to 28

significant digits, well beyond double-precision floating point.

>>> from decimal import *

>>> two = Decimal(2)

>>> Decimal.sqrt(two)

Decimal('1.414213562373095048801688724')

>>> getcontext().prec = 50

>>> Decimal.sqrt(two)

Decimal('1.4142135623730950488016887242096980785696718753769')

Tom Roberts
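Applied to the gamma factor from earlier in the thread (my extension, not part of Tom's post), the same decimal module reproduces the Sage result for v/c = 1 - 10**-20:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50          # 50 significant digits, as in the Sage run

d = Decimal(10) ** -20          # deficit 1 - v/c
v_over_c = 1 - d
gamma = 1 / (1 - v_over_c ** 2).sqrt()

print(gamma)                    # 7071067811.865475244026121290578154...
```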


Jun 22, 2021, 2:14:15 AM

Tom Roberts <tjrobe...@sbcglobal.net> wrote:

> On 6/21/21 3:01 AM, mark horn wrote:
>> [...]
>
> For extended-precision arithmetic, rather than "Sage, Maple, and
> Mathematica", people may find it easier to use Python and its decimal
> module.

Alternatively, you can use the oldest scientific programming
language, Fortran.

Fortran lets you declare a real variable with at least
n valid decimal digits, and common compilers (ifort and
gfortran, among others) allow up to IEEE 128-bit numbers,
with 33 valid digits. This is described in Michael
Metcalf's Wikipedia article on Fortran 95 features under
https://en.wikipedia.org/wiki/Fortran_95_language_features#REAL .

Jun 26, 2021, 6:36:53 AM

There is also the advantage (dating from the days when Einstein and

Feynman were still alive) that with those declared variables you can

actually compute things in a fast way, since Fortran is a compiled

language while Python is interpreted. (Apparently Thomas thinks the

first advantage should already be decisive, and perhaps it should,

but just for completeness...)

--

Jos

[Moderator's note: Einstein died in 1955, while the first version of

Fortran was released in 1957. So not quite, though Fortran development

might have already started while Einstein was still just alive.

Interesting that you mention Feynman of all people. At Los Alamos, he

developed a sort of human compiler where complex tasks were executed by

noting something on a card and passing it to another person who

calculated something on a mechanical calculator, added the result, and

passed it on to another person, and so on. In principle, one could

implement a Fortran compiler this way, as the standard doesn't specify

how the compiler actually has to be built, though in practice it is

always on an electronic digital computer. Fortran, which has seen

several updates since 1957, is still widely used for scientific

computing. --P.H.]

Jun 26, 2021, 12:17:30 PM

On Monday, June 21, 2021 at 10:01:07 AM UTC+2, toadast...@gmail.com wrote:

> 20-JUN-2021
>
> I find a maximum value for the Lorentz gamma factor,
> gamma = ((1-((v)^2/c^2))^(1/2))^-1 = 54794158.005943767726,
> for a relative velocity v = 299792457.99999997 m/s.

That calculation is 'correct' using c = 299792458.00000000 m/s.

> For an electron with mass m_e = 510998.91 ev/c^2 and momentum p_e=m_ev
> the max velocity is v_e = p_e/m_e = 299792457.9999999404 m/s.

That calculation is simple, but that is not the issue.

How do you know that m_e = 510998.91 eV/c^2, and

how do you know that p_e = m_e*v = 15319361264220.749544464964?

How are both calculated, i.e. measured, by means of experiment?

> As to what relevance this has for *physics*: the current record for the
> highest-energy cosmic ray has a gamma factor of over 10**20, corresponding
> to v/c of over 1 - 10**-40.
> -- jt]]

The physics behind this question is very important, because how do you know

that the speed of the cosmic ray is almost the same as the speed of light,

but smaller and not larger?

What I want to know is: how are both speeds established?

Of course you could claim that the speed of light is constant.

In that case you get a new question:

How do you know that the distance a photon travels in 1 second

is slightly more than the distance the cosmic ray travels,

i.e. 299792458.00000000 m versus 299792457.99999999999700207542 m,

using v/c = 1 - 10**-20?

How is that established by means of experiment? Or observation?

IMO this is difficult.

It means that if both signals are emitted simultaneously,

the photon should arrive before the cosmic ray.

Maybe this document gives a clue, because it mentions Supernova 1987A:

https://www.abc.net.au/science/articles/2011/11/25/3376138.htm

"Neutrinos win but Einstein has not lost yet"

"but the supernova observations showed that neutrinos and photons

generated at the same time by the supernova, 160 thousand light years

away, arrived here at the same time as well."

But if they arrive simultaneously here, how do you know that

they were emitted simultaneously over there?

Nicolaas Vroom

http://users.pandora.be/nicvroom/

Jun 26, 2021, 4:07:42 PM

Nicolaas Vroom <nicolaa...@pandora.be> wrote:

> Of course you could claim that the speed of light is constant.

The way that the SI units are defined now, the speed of light

in vacuum is indeed constant. If you measure anything else than

299792458 m/s, recalibrate your measurement devices.

Jun 26, 2021, 6:57:19 PM

In article <4062c54b-7af6-4f3a...@googlegroups.com>,

Nicolaas Vroom <nicolaa...@pandora.be> writes:

> Maybe this document gives a clue, because it mentions Supernova 1987A
> https://www.abc.net.au/science/articles/2011/11/25/3376138.htm
> "Neutrinos win but Einstein has not lost yet"
>
> "but the supernova observations showed that neutrinos and photons
> generated at the same time by the supernova, 160 thousand light years
> away, arrived here at the same time as well."
> But if they arrive simultaneously here how do you know that
> they were emitted simultaneously over there?

The fact that they arrived at roughly the same time sets strong upper

limits on the mass of the neutrinos involved. For some types of

neutrinos, those were (maybe still are) the best limits. A supernova is

not an instantaneous event, and for various reasons the light and

neutrinos are not produced at exactly the same time or, more

importantly, cannot travel freely from the same moment, so some

discrepancy is expected. It's not clean enough to use for the type of

test which you have in mind.

Jun 26, 2021, 6:57:37 PM

In article <sb7pcc$243$3...@newsreader4.netcologne.de>, Thomas Koenig
<tko...@netcologne.de> writes:

> The way that the SI units are defined now, the speed of light
> in vacuum is indeed constant. If you measure anything else than
> 299792458 m/s, recalibrate your measurement devices.

The speed of light is now a defined quantity, that is true. However,

that is merely a practical matter. If the speed of light really were

variable, that could still be detected just as easily as before the

redefinition. Suppose that the speed of light did drop by a measurable

amount. People would not immediately redefine the length of everything

because of that.

Many other SI units were recently redefined, so they are "exact" in that

sense. The same caveats apply.


Jun 27, 2021, 3:38:59 AM

On 6/26/21 5:36 AM, Jos Bergervoet wrote:

> [...] with those declared variables you can actually compute things
> in a fast way, since Fortran is a compiled language while Python is
> interpreted.

While true, this is a red herring, except for very-old-school

programmers who don't understand how to use Python [#]. Modern

software development involves using libraries rather than coding stuff

yourself. Python libraries numpy and scipy are every bit as fast as

FORTRAN when using arrays for large computations. When not using arrays,

and for small computations, Python is faster than humans, which is

usually all that matters.

[#] Hint: if you ever loop over the elements of an array

in Python, you are probably doing it wrong, or at least

very inefficiently. (Does not apply to small lists.)

The real win for Python, however, is in improved programmer productivity

compared to Fortran. As any serious programmer learns the basics of a

new language in a day or two, this improved productivity applies even if

you don't already know Python. (Except for tiny, one-off projects.)

There are situations where languages like Fortran or C++ are

appropriate, mostly when dealing with legacy libraries, or building a

large software edifice.

Tom Roberts
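The footnote's advice can be made concrete with a short sketch (mine, not from the post; the array values are arbitrary): one vectorized numpy expression computes gamma for a whole array of velocities, with the loop running in compiled code rather than in Python.

```python
import numpy as np

v_over_c = np.array([0.1, 0.5, 0.9, 0.99])

# Vectorized: a single expression, evaluated inside numpy's compiled code.
gamma = 1.0 / np.sqrt(1.0 - v_over_c**2)

# Element-by-element Python loop: same numbers, but slow for large arrays.
gamma_loop = np.array([1.0 / np.sqrt(1.0 - x**2) for x in v_over_c])

assert np.allclose(gamma, gamma_loop)
print(gamma)
```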


Jun 27, 2021, 8:25:49 AM

On 21/06/27 9:38 AM, Tom Roberts wrote:

> On 6/26/21 5:36 AM, Jos Bergervoet wrote:
>> [...] with those declared variables you can actually compute things
>> in a fast way, since Fortran is a compiled language while Python is
>> interpreted.
>
> While true,

Thank you for the confirmation. (As I wrote, I mentioned this advantage

merely for completeness. It does, however, seem to raise some deeper

questions.)

> this is a red herring, except for very-old-school

> programmers who don't understand how to use Python [#]. Modern

> software development involves using libraries rather than coding stuff

> yourself.

Are you sure that modern software development is any better than this old-school variety where

"stuff" is actually coded? (We should restrict the question to

applications in physics, of course, to keep it somewhat on-topic...)

> Python libraries numpy and scipy are every bit as fast as

> FORTRAN when using arrays for large computations.

Are you sure that all future work (in computational physics) will only require combining things that have

already been programmed? We know that advances in mathematics are often

driven by physics. Wouldn't that also apply to numerical mathematics,

requiring us to go beyond what is in any existing library?

How did, for instance, the development of symplectic integrators go?

(NB: I'm purely asking out of curiosity, I don't claim that this proves

anything I said, it just seems to be a relevant, recent development.)

If I look for discussions about actual code, I see various languages

being used, it surely also includes Fortran.

<https://stackoverflow.com/questions/3680136/help-with-symplectic-integrators>

> ...

> The real win for Python, however, is in improved programmer productivity

OK, now you jump to another argument, probably more suited (regardless
of whether it's true) to another subthread, as it would after all be

appropriate, in these matters, to keep things structured...

--

Jos

Jun 27, 2021, 11:32:48 PM

Tom Roberts <tjrobe...@sbcglobal.net> wrote:

> The real win for Python, however, is in improved programmer productivity

> compared to Fortran.

I would like to refer everybody to

http://blog.khinsen.net/posts/2017/11/16/a-plea-for-stability-in-the-scipy-ecosystem/

and the follow-up

http://blog.khinsen.net/posts/2017/11/22/stability-in-the-scipy-ecosystem-a-summary-of-the-discussion/

when evaluating that statement. The central case that Hinsen

makes is the poor reproducibility of results due to frequent

changes in the underlying software. And, let's face it, poor

reproducibility is just about the worst thing that can happen when

using the scientific method.

[[Mod. note -- Thanks for posting these links -- they are indeed well

worth reading by s.p.r readers who do software development for any but

the most transient of purposes.

The widespread use of "notebooks" (e.g., Jupyter) often makes things

a lot worse, by encouraging people to ignore much of what we've learned

over the past 50 years about good software-engineering practice (e.g.,

hidden dependence on global variables can be dangerous). Joel Grus's

discussion

https://docs.google.com/presentation/d/1n2RlMdmv1p25Xy5thJUhkKGvjtV-dkAIsUXP-AL4ffI/edit#slide=id.g362da58057_0_1

is very interesting, and has some useful best-practices guidelines to

help avoid trouble when using notebooks.

Leslie Hatton's "T experiments"

http://kar.kent.ac.uk/21557/1/THE_T-EXPERIMENTS_ERRORS_IN.pdf

found serious software design flaws and lack-of-reproducibility in a

range of geophysics data analysis software (with suggestions that the

problem was much wider than just that field).

In general, my impression is that many physicists are reasonably

competent programmers-in-the-small who know little of software engineering

and programming-in-the-large. :(

On the positive side, well-designed software that takes backwards

compatibility seriously can remain useful for a very long time, even while

evolving to support newer environments and provide expanded functionality.

Fortran is a great example; I regularly use some 1990s-vintage Fortran

subroutine libraries which remain valid and useful today. Perl is another

good example; this past month I've made a lot of use of a Perl program

which I originally wrote in 1997 (and haven't modified since then); it

still works fine using a modern Perl.

In contrast, the Python 2 to Python 3 transition was long, painful, not

backwards-compatible, and not even "there's an automated tool that can

migrate my code". The required changes were sometimes mechanical, but

sometimes required nontrivial thought by someone who understood the code.

Ouch. There's a great lessons-learned discussion (from someone who headed

the Python 2 to Python 3 migration for the /Mercurial/ project) at

https://gregoryszorc.com/blog/2020/01/13/mercurial's-journey-to-and-reflections-on-python-3/

-- jt]]


Jun 28, 2021, 9:28:03 AM

On Saturday, June 26, 2021 at 10:07:42 PM UTC+2, Thomas Koenig wrote:

That is the problem.

The issue is: I have to measure the speed of light and

I have to measure the speed of an electron v_e or of a cosmic ray.

The question is: how do you do that in a uniform way in both cases, so that

other people can repeat these experiments?

The result for the gamma ray should be on the order of 299792457.999999999997

m/sec. That means the experiments should be done very accurately.

The speed of light should be 299792458 m/sec in vacuum.

But suppose that I cannot measure the speed of light in a vacuum and

the result I get is case (1) 299792457.9 m/sec or case (2) 299792458.1 m/sec.

Both results give me a headache, because they are wrong.

In case (1) the speed is too small and in case (2) too large.

Now suppose that the results of my experiments with a gamma ray are

in case (1) 299792457.8995 m/sec or in case (2) 299792458.0995 m/sec.

In both cases (1) and (2) of course I can calculate a gamma factor,

but that is simple mathematics.

IMO the most important question to answer is: what is the standard

way to measure the speed of light, of an electron, or of a cosmic ray?

A more advanced question is what to do in case the speed of the cosmic ray

is higher than the speed of light,

i.e. 299792458.0995 m/sec versus 299792458 m/sec in vacuum.

Nicolaas Vroom.


Jun 29, 2021, 2:41:43 AM6/29/21

to

Phillip Helbig (undress to reply) <hel...@asclothestro.multivax.de>

wrote:

> In article <sb7pcc$243$3...@newsreader4.netcologne.de>, Thomas Koenig

> <tko...@netcologne.de> writes:

>

> > Nicolaas Vroom <nicolaa...@pandora.be> schrieb:

> >

> > > Of course you could claim that the speed of light is constant.

> >

> > The way that the SI units are defined now, the speed of light

> > in vacuum is indeed constant. If you measure anything else than

> > 299792458 m/s, recalibrate your measurement devices.

>

> The speed of light is now a defined quantity, that is true. However,

> that is merely a practical matter. If the speed of light really were

> variable, that could still be detected just as easily as before the

> redefinition.

It is incredible how much misunderstanding there is

on such a simple subject.

To clear things up:

The speed of light cannot 'really' be variable.

Why?

In order for the speed of light to be measurable at all

we need to define both a length and a time unit.

Now this can be done in many different ways.

For example, we can define the second on the basis of a seconds pendulum,

or on the basis of the vibrations of a quartz crystal,

or on the basis of some atomic hyperfine transition, or....

With some thought you can figure out

how those definitions scale, when fundamental constants vary.

(different of course, the pendulum has a G in it,

the other second definitions don't)

Likewise for length units, like platinum bars, atomic wavelengths,

stabilised lasers, the AU, etc.

Now if some, or all of the fundamental constants vary,

so does the measured c, depending on how we set up

the definitions of the length and time units.

(with all kinds of possible answers)

This 'measured' c is not something to do with nature,

it depends on our -human- measurement definitions.

Now it should be obvious to anyone with any sense

of how physics should be done

that the units of time and length

should be chosen in mutually compatible ways.

(so differing by a factor of c)

So, with the right definitions of units c cannot possibly be variable.

Or, saying the same in different words:

c is not really a fundamental constant of nature

in any way that makes -physical- sense.

It is merely a conversion factor between units.

You could just as well ask how 5280 feet/mile

is going to change with the age of the universe.

Jan
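The "conversion factor" point above can be made concrete with a toy sketch (Python; the function names are my own illustration, not from the thread): once the length unit is defined as the light-travel distance of the time unit, the measured c is fixed by construction, exactly like 5280 feet/mile.

```python
C = 299_792_458  # m/s: the SI conversion factor between metres and seconds

def seconds_to_metres(t):
    # Express a duration as the equivalent light-travel length,
    # which is exactly how the SI metre is defined.
    return t * C

def metres_to_seconds(d):
    # The inverse conversion: a length as a light-travel time.
    return d / C

# With mutually compatible units the "measured" speed of light cannot
# come out as anything but the defining conversion factor:
assert seconds_to_metres(1) == C
assert metres_to_seconds(C) == 1.0
```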


Jun 29, 2021, 2:51:03 AM6/29/21

to

On Sunday 27 June 2021 at 00:57:19 UTC+2, Phillip Helbig (undress to reply) wrote:

> > Maybe this document gives a glue, because it mentions Supernova 1987A

> > https://www.abc.net.au/science/articles/2011/11/25/3376138.htm

> > Neutrino's win but Einstein has not lost yet

What I want to know is a clear description of the tests needed

to measure the speed of light, of an electron, and of a gamma ray.

IMO the descriptions of these tests involve highly accurate, uniform

measurements, and the tests should be performed in a standard way.

For example, I expect they should all be done in vacuum.

If that is possible, then they can each be compared with the standard speed

of light c in vacuum.

If that is not possible, then they should at least all be done under

the same conditions.

But that raises a problem, related to the value of the speed of light,

(not in vacuum) that can be expected and is accepted.

IMO these issues are more important than the calculation of gamma.

I agree with you that part of the problems are related to the

(simultaneous or not) production of these 'particles'

Nicolaas Vroom.

[[Mod. note -- This topic is a bit tricky, because to measure a speed

in meters/second, we need to know what a meter is, and what a second is.

The SI definition of a second is fairly straightforward:

"The second is defined as being equal to the time duration of

9 192 631 770 periods of the radiation corresponding to the

transition between the two hyperfine levels of the fundamental

unperturbed ground-state of the caesium-133 atom."

But the SI definition of a meter (metre if you prefer UK spellings) is

a bit trickier. As of 1983, the SI definition of the meter is

(https://en.wikipedia.org/wiki/Metre#Speed_of_light_definition)

"The metre is the length of the path travelled by light in vacuum

during a time interval of 1/299 792 458 of a second."

So with this definition, the speed of light is necessarily exactly

299 792 458 meters/second. Experiments to "measure the speed of light"

(e.g., by timing a light pulse over a measured distance) are actually

measuring a *length* (in meters). E.g., if your measurement shows

that it takes light 100 nanoseconds to travel a certain distance,

then what you've really done is measure that distance to be

(as 100e-9 seconds * 299 792 458 meters/second) = 29.9792458 meters.

-- jt]]
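The moderator's arithmetic can be replayed in a few lines (Python; a minimal sketch of the point, with my own helper name):

```python
C = 299_792_458  # m/s, exact by the 1983 SI definition of the metre

def timed_distance(travel_time_s):
    # Since 1983, "measuring the speed of light" by timing a pulse over
    # a path really measures the path's length: distance = time * c,
    # with c exact by definition.
    return travel_time_s * C

# 100 nanoseconds of light travel corresponds to 29.9792458 metres.
d = timed_distance(100e-9)
assert abs(d - 29.9792458) < 1e-9
```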


Jun 29, 2021, 10:51:47 AM6/29/21

to

In article <1pbi21w.1t8rob81hgck0aN%nos...@de-ster.demon.nl>,

> To clear things up:

> The speed of light cannot 'really' be variable.

It is true that the metre is now defined as the distance travelled by

light in a certain time and thus by definition the speed of light is

constant. However, the metre (and the second) used to be defined

differently than they are now. Back then, it was certainly possible, in

principle, to detect a change in the speed of light. One could perform

the same experiment today. Nature doesn't know what the current SI

definitions are.

The metre is now defined as it is a) as a practical matter and b)

because we assume that the speed of light is constant. If the speed of

light did change, i.e. if one does observations like those of Rømer,

Fizeau, etc., using, say, pendulum clocks as a reference, and notice

that it changes, then one has measured the change. The consequence

would not be to point to the SI definition and say that it cannot

change, therefore we must modify other definitions (perhaps even

periodically if the speed of light depends on time), but rather would be

to realize that our assumptions in the current definition of the metre

are wrong and must be changed.

Think about the definition of the metre and kilogram. Why are the

original definitions not used? One reason is that one noticed that

the mass of the reference kilogram has actually changed with time. By

your logic, that should not have been possible, since, by definition,

the reference kilogram has a mass of exactly one kilogram.

Nevertheless, the change was detected.


Jun 29, 2021, 2:49:55 PM6/29/21

to

On 21/06/29 8:41 AM, J. J. Lodder wrote:

> Phillip Helbig (undress to reply) <hel...@asclothestro.multivax.de>

>> In article <sb7pcc$243$3...@newsreader4.netcologne.de>, Thomas Koenig

>>> Nicolaas Vroom <nicolaa...@pandora.be> schrieb:
>>>

>>>> Of course you could claim that the speed of light is constant.

...

> It is incredible how much misunderstanding there is

> on such a simple subject.

I think there are several reasons for it. See below..

> The speed of light cannot 'really' be variable.

> Why?

> In order for the speed of light to be measurable at all

> we need to define both a length and a time unit.

> ...

Indeed we can agree that basically this is determined by the

metric of space. Any massless field will have a propagation

speed defined by the metric, but any measurement of speed also

has to use that metric. So the result is fixed.

This should make clear that a change cannot be observed using

the local metric, but not everyone will agree that this means

it cannot 'really' change.

We know that seen from another point in space, the speed of

light can be different if space-time is curved (as it usually

is..) You may then claim that it is only an 'apparent' slowing

down if e.g. light falls into a black hole, but then we only

change the discussion to the meaning of 'apparent' and 'really'.

We can't maintain that it is unobservable, in that case.

Obviously, by specifying the "locally observed speed of light"

this problem is avoided, but if you do not want to talk about

different points in space or in time, then it becomes a bit

too obvious that 'change' cannot be observed, it looks a lot

like not wanting to observe it, then..

Finally, it's also conceivable that the photon mass at some

point in the future will become nonzero, theoretically (another

theory says that this has already happened).

All these things explain the ongoing discussion, I think..

--

Jos
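The "apparent slowing" near a black hole mentioned above can be made concrete with a standard textbook result (not from this thread): in Schwarzschild coordinates, a radially moving light ray has coordinate speed dr/dt = c(1 - r_s/r), where r_s is the Schwarzschild radius. A sketch:

```python
C = 299_792_458.0  # locally measured vacuum speed of light, m/s

def radial_coordinate_light_speed(r, r_s):
    # Schwarzschild-coordinate speed of a radial light ray as assigned
    # by a distant observer: c * (1 - r_s/r). It tends to zero at the
    # horizon r = r_s, even though every local observer on the ray's
    # path still measures exactly c.
    return C * (1.0 - r_s / r)

# Far from the hole the coordinate speed approaches c...
assert abs(radial_coordinate_light_speed(1.0e12, 3.0e3) / C - 1.0) < 1e-8
# ...and it vanishes at the horizon.
assert radial_coordinate_light_speed(3.0e3, 3.0e3) == 0.0
```

This is exactly the tension in the discussion: the slowdown is real as a coordinate statement about distant points, while the locally measured speed is fixed.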

Jun 30, 2021, 3:37:22 PM6/30/21

to

On 6/29/21 1:49 PM, Jos Bergervoet wrote:

> Indeed we can agree that basically this is determined by the metric

> of space. Any massless field will have a propagation speed defined

> by the metric, but any measurement of speed also has to use that

> metric. So the result is fixed.

You mean the metric of spacetime (not space). And this applies only to a

massless field -- there is no fundamental reason for the photon to be

massless, it's just that its mass is observed to be consistent with zero

and an extremely tiny upper limit (< 10^-18 eV).

> This should make clear that a change cannot be observed using the

> local metric, but not everyone will agree that this means it cannot

> 'really' change.

The constancy of the vacuum speed of light applies only locally, so

everyone who understands the issues will agree for a massless field.

But of course that's the rub -- we don't really know whether the photon

field is truly massless.

> We know that seen from another point in space, the speed of light

> can be different if space-time is curved (as it usually is..) You

> may then claim that it is only an 'apparent' slowing down if e.g.

> light falls into a black hole, but then we only change the discussion

> to the meaning of 'apparent' and 'really'. We can't maintain that it

> is unobservable, in that case.

That's just an argument over the meanings of words. Moreover it's an

argument that never comes up because the constancy of the vacuum speed

of light applies only locally.

All this only applies to massless fields, and we don't really know

whether the photon field is truly massless. Of course we never will....

Tom Roberts

Jul 5, 2021, 7:49:13 AM7/5/21

to

On Wednesday 30 June 2021 at 21:37:22 UTC+2, Tom Roberts wrote:

> You mean the metric of spacetime (not space). And this applies only to a
> massless field -- there is no fundamental reason for the photon to be
> massless, it's just that its mass is observed to be consistent with zero
> and an extremely tiny upper limit (< 10^-18 eV).

How is this mass observed? Or should I write upper limit?

> > This should make clear that a change cannot be observed using the
> > local metric, but not everyone will agree that this means it cannot
> > 'really' change.

> The constancy of the vacuum speed of light applies only locally, so
> everyone who understands the issues will agree for a massless field.

What are the issues?

> But of course that's the rub -- we don't really know whether the photon

> field is truly massless.

Is it possible to measure this photon field?

Is it not true,

that when it is possible to measure the energy of a light pulse,

that then individual photons also have energy,

and as a consequence individual photons also have a mass?

This implies that when a star emits light it also emits mass.

> > We know that seen from another point in space, the speed of light

> > can be different if space-time is curved (as it usually is..) You

> > may then claim that it is only an 'apparent' slowing down if e.g.

> > light falls into a black hole, but then we only change the discussion

> > to the meaning of 'apparent' and 'really'. We can't maintain that it

> > is unobservable, in that case.

> That's just an argument over the meanings of words. Moreover it's an

> argument that never comes up because the constancy of the vacuum speed

> of light applies only locally.

Does that imply that globally, considering a light pulse (explosion)

emitted over a long distance, its speed is not constant?

Is this text from Wikipedia true?:

"Photons are massless,[a] so they always move at the speed of light in vacuum,

299792458 m/s (or about 186,282 mi/s).

[a] The photon's invariant mass (also called "rest mass" for massive particles)

is believed to be exactly zero. This is the notion of particle mass generally

used by modern physicists. The photon does have a nonzero relativistic mass,

depending on its energy, but this varies according to the frame of reference."

Nicolaas Vroom
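The "a star emits mass" point follows directly from E = mc^2, whatever one decides about photon rest mass. A minimal sketch (Python; the solar luminosity figure is a rounded standard value, not taken from this thread):

```python
C = 299_792_458.0  # m/s

def radiated_mass_per_second(luminosity_watts):
    # E = m c^2: the mass-equivalent carried away by radiation each
    # second is the radiated power divided by c squared.
    return luminosity_watts / C**2

# With the Sun's luminosity of roughly 3.8e26 W, the Sun sheds about
# four billion kilograms per second in light alone.
sun_loss = radiated_mass_per_second(3.8e26)
assert 4.0e9 < sun_loss < 4.4e9
```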


Jul 5, 2021, 2:36:05 PM7/5/21

to

On 6/28/21 8:28 AM, Nicolaas Vroom wrote:

> I have to measure the speed of light and I have to measure the speed

> of an electron v_e or of a cosmic ray.[...]

Before worrying about that, you should first study significant digits,

experimental resolutions, and errorbars. That will give you perspective

about your questions. Your example numbers are unrealistic and

unworkable, which that study would teach you to avoid.

> A more advanced question is what to do in case the speed of cosmic

> ray is higher than the speed of light i.e 299792458,0995 m/sec versus

> 299792458 m/sec in vacuum.

Such a result is OK and not unexpected, as long as your measurement resolution is

about 0.04 m/sec or larger.

Tom Roberts
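The resolution argument above can be sketched as a simple consistency check (Python; the helper name and the 2.5-sigma cut are my own illustrative choices, not from the post):

```python
C = 299_792_458.0  # m/s, the defined vacuum speed of light

def consistent_with_c(measured, resolution, n_sigma=2.5):
    # A measured speed slightly above c is unremarkable when the excess
    # is within a few resolution units of c; only a many-sigma excess
    # would indicate a real anomaly (or a calibration error).
    return abs(measured - C) <= n_sigma * resolution

# 299792458.0995 m/s is consistent with c at a 0.04 m/s resolution...
assert consistent_with_c(299_792_458.0995, 0.04)
# ...but would be a genuine anomaly at a 0.01 m/s resolution.
assert not consistent_with_c(299_792_458.0995, 0.01)
```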

Jul 5, 2021, 2:36:46 PM7/5/21

to

On 6/29/21 1:41 AM, J. J. Lodder wrote:

> [...] The speed of light cannot 'really' be variable. [...]

You make far too many assumptions to be reasonable.

Certainly the (vacuum) speed of light COULD vary, it's just that in the

world we inhabit, with current technology, it is observed to not vary

significantly (when measured using standard clocks and rulers at rest in

some locally inertial frame). But it certainly is possible that in the

future we will develop technology with greatly improved resolution and

discover that it actually does vary in the world we inhabit. It is also

possible we will never find it varies -- science is a JOURNEY, not a

destination.

Yes, c is really a units conversion factor, IN THE SPACETIME MODEL. And

yes, it is not possible for the symmetry speed of Lorentzian manifolds

to vary. But the restriction you imagine is on that symmetry speed, and

not really on the (vacuum) speed of light -- there is no fundamental

reason why light must propagate at the symmetry speed.

Note that it is an historical accident that we call the symmetry speed

of Lorentzian manifolds by the name "the speed of light" (with "vacuum"

implied). Light really has nothing to do with it.

Tom Roberts


Jul 5, 2021, 2:37:03 PM7/5/21

to

Phillip Helbig (undress to reply) <hel...@asclothestro.multivax.de>

wrote:

> In article <1pbi21w.1t8rob81hgck0aN%nos...@de-ster.demon.nl>,

> nos...@de-ster.demon.nl (J. J. Lodder) writes:=20
wrote:

> In article <1pbi21w.1t8rob81hgck0aN%nos...@de-ster.demon.nl>,

>=20

> > Phillip Helbig (undress to reply) <hel...@asclothestro.multivax.de>

> > wrote:

> >

> >> In article <sb7pcc$243$3...@newsreader4.netcologne.de>, Thomas Koenig

> >> <tko...@netcologne.de> writes:=20
> > wrote:

> >

> >> In article <sb7pcc$243$3...@newsreader4.netcologne.de>, Thomas Koenig

> >>

> >>> Nicolaas Vroom <nicolaa...@pandora.be> schrieb:

> >>>

> >>>> Of course you could claim that the speed of light is constant.

> >>>

> >>> The way that the SI units are defined now, the speed of light

> >>> in vacuum is indeed constant. If you measure anything else than

> >>> 299792458 m/s, recalibrate your measurement devices.

> >>

> >> The speed of light is now a defined quantity, that is true. However=
> >>> Nicolaas Vroom <nicolaa...@pandora.be> schrieb:

> >>>

> >>>> Of course you could claim that the speed of light is constant.

> >>>

> >>> The way that the SI units are defined now, the speed of light

> >>> in vacuum is indeed constant. If you measure anything else than

> >>> 299792458 m/s, recalibrate your measurement devices.

> >>

,

> >> that is merely a practical matter. If the speed of light really wer=

e

> >> variable, that could still be detected just as easily as before the

> >> redefinition.

> >

> > It is incredible how much misunderstanding there is

> > on such a simple subject.

>=20
> >> variable, that could still be detected just as easily as before the

> >> redefinition.

> >

> > It is incredible how much misunderstanding there is

> > on such a simple subject.

> I agree. :-)

>=20

> > To clear things up:

> > The speed of light cannot 'really' be variable.

>=20
> > The speed of light cannot 'really' be variable.

> It is true that the metre is now defined as the distance travelled by

> light in a certain time and thus by definition the speed of light is=20
> constant. However, the metre (and the second) used to be defined=20

> differently than they are now.

Certainly, the definitions have changed several times even.
> Back then, it was certainly possible, in=20

> principle, to detect a change in the speed of light.=20

Yes, but what would this mean? [1]

The changes would have depended on the particular -human- choices

made for the definitions of those units.

And even so, my point stands.

The only physically sound way

of defining those independent length and time units

is to chose them in mutally compatible ways.

(so with a factor c between them,

for example both based on atomic hyperfine structure)

> One could perform the same experiment today.

Those experiments -are- done routinely in standards labs.

Only the name differs, they are nowadays called:

'the calibration of a secondary meter standard'.

> Nature doesn't know what the current SI definitions are.

All units are LGM or human inventions.

> The metre is now defined as it is a) as a practical matter and b)
> because we assume that the speed of light is constant.

We assume nothing about that.
We define it to be the case.

(and take the consequences, if any, somewhere else)

> If the speed of light did change, i.e. if one does observations like
> those of Rømer, Fizeau, etc., using, say, pendulum clocks as a
> reference, and notices that it changes, then one has measured the
> change. The consequence would not be to point to the SI definition and
> say that it cannot change, therefore we must modify other definitions
> (perhaps even periodically if the speed of light depends on time), but
> rather would be to realize that our assumptions in the current
> definition of the metre are wrong and must be changed.

Certainly. If yesterday's pistons won't fit tomorrow's engines
we must overhaul all of our physics.

> Think about the definition of the metre and kilogram. Why are the
> original definitions not used? One reason is because one noticed that

> the mass of the reference kilogram has actually changed with time.

Not really.

All that we can say is that we can't reproduce the relative mass

of certain chunks of metal as accurately as we thought we could.

(reasons mostly unknown)

That's why the kilo is now locked to Planck's constant,

with, one hopes, better reproducibility.

> By your logic, that should not have been possible, since, by
> definition, the reference kilogram has a mass of exactly one kilogram.
> Nevertheless, the change was detected.

See above, no.

The other so-called kilograms may have changed instead.

Only if all metal kilograms drift in the same way wrt Planck
will there be a new problem.

Jan

[1] For your amusement:

there is creationist folklore about the speed of light.

Taking a handful of the very first measurements from the 19th century

(with large errors)

they conclude that the speed of light varies enormously.

And (you could not possibly have guessed it of course)

so enormously that the apparent age of the universe of billions of years

fits precisely with the creation of the earth 6 000 years ago.

Jul 5, 2021, 2:37:21 PM7/5/21

to

Op dinsdag 29 juni 2021 om 08:51:03 UTC+2 schreef Nicolaas Vroom:

> Op zondag 27 juni 2021 om 00:57:19 UTC+2 schreef Phillip Helbig:
>
> Nicolaas Vroom.
>
> [[Mod. note -- This topic is a bit tricky, because to measure a speed
> in meters/second, we need to know what a meter is, and what a second is.

That is correct.

> "The metre is the length of the path travelled by light in vacuum
> during a time interval of 1/299 792 458 of a second."
>
> So with this definition, the speed of light is necessarily exactly
> 299 792 458 meters/second. Experiments to "measure the speed of light"
> (e.g., by timing a light pulse over a measured distance) are actually
> measuring a *length* (in meters). E.g., if your measurement shows
> that it takes light 100 nanoseconds to travel a certain distance,
> then what you've really done is measure that distance to be
> (as 100e-9 seconds * 299 792 458 meters/second) = 29.9792458 meters.

That is the measurement by person "A" in vacuum.
What that means is that "A" first places two markers a certain distance apart
and then sends a light signal between those two markers.
What "A" measures is that it takes 100 nanoseconds to travel that distance.
His conclusion is that the distance is 29.9792458 meters.

Suppose "B" does 'exactly' the same, but "B" measures that it takes less
than 100 nanosecs, and his conclusion is that the distance is
29.9792458 meters. Is that physically possible?

In order for "B" to perform the experiment he has to rely on a very detailed
description (supplied by "A" or ?) of how to perform this experiment.
For example, it should tell you how to measure the time (everywhere in the
universe) and give a clear definition of exactly what a vacuum is.
This type of information is of critical importance to calculate the distance
travelled by a light pulse and, secondly, to establish whether that distance
is everywhere the same, implying that the speed of light is a physical
constant and also everywhere the same. (Personally I doubt that.)

The same type of description is also required if you want to measure
the speed of an electron or a cosmic ray.
In that case you first have to measure the 'fixed' distance using a light
pulse; secondly you have to measure the time t2 it takes for the cosmic ray
to travel that same 'fixed' distance.
Dividing the 'fixed' distance by t2 gives you the speed of the cosmic ray.

Nicolaas Vroom
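The two-step procedure just described is easy to put into numbers. This is only a sketch: c and the 100 ns figure come from the discussion above, while the particle timing t2 is a hypothetical value chosen for illustration.

```python
# Speed of light in m/s (exact, by the SI definition of the metre)
C = 299_792_458.0

# Step 1: light takes t1 seconds between the two markers; with the SI
# definition this *defines* the distance in metres.
t1 = 100e-9                 # 100 nanoseconds, as in the example above
distance = t1 * C           # 29.9792458 m

# Step 2: time a particle (electron, cosmic ray) over the same 'fixed'
# distance; its speed follows by division.
t2 = 100.5e-9               # hypothetical timing, slightly longer than t1
v_particle = distance / t2

print(round(distance, 7))   # 29.9792458
print(v_particle < C)       # True for any massive particle
```

Note that step 2 inherits whatever the detailed description of step 1 specifies about clocks and vacuum, which is exactly the point being made above.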

Jul 5, 2021, 2:44:18 PM7/5/21

to

Op dinsdag 29 juni 2021 om 20:49:55 UTC+2 schreef Jos Bergervoet:

> On 21/06/29 8:41 AM, J. J. Lodder wrote:
> > It is incredible how much misunderstanding there is
> > on such a simple subject.
> I think there are several reasons for it. See below..
> > The speed of light cannot 'really' be variable.
> > Why?

Why can the speed of light not be different in different places
in the universe?

> > In order for the speed of light to be measurable at all
> > we need to define both a length and a time unit.
> > ...

The time unit is the most tricky if the method to measure time
involves light signals and when you want to use time to measure
the speed of light. This looks like circular reasoning.

> Indeed we can agree that basically this is determined by the
> metric of space.

Exactly what is determined by the metric of space?
This also raises the question of how this metric is measured.

> Any massless field will have a propagation
> speed defined by the metric, but any measurement of speed also
> has to use that metric. So the result is fixed.

This also looks like circular reasoning.
In the Newtonian context the first thing you have to do is observe the
objects studied (positions and speeds) during a certain period.
Important is that these observations require the speed of light.
Secondly, using Newton's Law you can calculate the masses of the
objects studied. That calculation does not require the speed of light.

> This should make clear that a change cannot be observed using
> the local metric, but not everyone will agree that this means
> it cannot 'really' change.
>
> We know that seen from another point in space, the speed of
> light can be different if space-time is curved (as it usually
> is..)

I assume when objects are involved.
Tricky sentence. What does 'seen' mean? Normally 'seen' implies the speed
of light.

> All these things explain the ongoing discussion, I think..

The ongoing discussion involves both the speed of light and the
speed of a cosmic ray.
The issue is to describe how both are defined and how both are
actually measured in a clear and unambiguous manner.
That is not easy.

Nicolaas Vroom

Jul 5, 2021, 2:45:32 PM7/5/21

to

In article <a5210321-687e-464d...@googlegroups.com>,
Nicolaas Vroom <nicolaa...@pandora.be> writes:

> Op woensdag 30 juni 2021 om 21:37:22 UTC+2 schreef Tom Roberts:
> > On 6/29/21 1:49 PM, Jos Bergervoet wrote:
> > > Indeed we can agree that basically this is determined by the metric
> > > of space. Any massless field will have a propagation speed defined
> > > by the metric, but any measurement of speed also has to use that
> > > metric. So the result is fixed.
> > You mean the metric of spacetime (not space). And this applies only to a
> > massless field -- there is no fundamental reason for the photon to be
> > massless, it's just that its mass is observed to be consistent with zero
> > and an extremely tiny upper limit (< 10^-18 eV).
>
> How is this mass observed? Or should I write upper limit?

The upper limit comes from the observed accuracy of the inverse-square
law. Also, if photons had rest mass, then photons of different energies
would travel at different speeds. That effect is used to set limits on
neutrino masses.

> Is it not true,
> that when it is possible to measure the energy of a light pulse,
> that then individual photons also have energy,
> and as a consequence individual photons also have a mass?

E = mc^2, so in that sense photons have mass...

> This implies when a star emits light it also emits mass.

...and as a result the mass of a star decreases when it emits light.
The question is whether the rest mass is zero.

> > > We know that seen from another point in space, the speed of light
> > > can be different if space-time is curved (as it usually is..) You
> > > may then claim that it is only an 'apparent' slowing down if e.g.
> > > light falls into a black hole, but then we only change the discussion
> > > to the meaning of 'apparent' and 'really'. We can't maintain that it
> > > is unobservable, in that case.
> > That's just an argument over the meanings of words. Moreover it's an
> > argument that never comes up because the constancy of the vacuum speed
> > of light applies only locally.
>
> Does that imply that globally, considering a light pulse (explosion)
> emitted over a long distance, its speed is not constant?

Look up "Shapiro delay".

> Is this text from Wikipedia true?:
> "Photons are massless,[a] so they always move at the speed of light in vacuum,
> 299792458 m/s (or about 186,282 mi/s).
> [a] The photon's invariant mass (also called "rest mass" for massive particles)
> is believed to be exactly zero. This is the notion of particle mass generally
> used by modern physicists. The photon does have a nonzero relativistic mass,
> depending on its energy, but this varies according to the frame of reference."

Yes.
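The remark above that a massive photon's speed would depend on its energy can be quantified with the textbook relation v/c = sqrt(1 - (mc^2/E)^2). The sketch below uses the ~10^-18 eV upper limit quoted in this thread; the 1 eV photon energy is an illustrative choice. Note the naive square root would round to exactly 1 in double precision, so the series form is the usable one.

```python
def speed_deficit(E_eV, m_eV):
    """Approximate 1 - v/c for a particle of rest-mass energy m_eV
    and total energy E_eV, valid for m_eV << E_eV:
        1 - v/c ~= (m_eV / E_eV)**2 / 2
    (The exact 1 - sqrt(1 - x**2) underflows to 0.0 in double
    precision for x this small.)"""
    x = m_eV / E_eV
    return 0.5 * x * x

# A 1 eV (roughly visible-light) photon at the quoted mass bound:
print(speed_deficit(1.0, 1e-18))   # ~5e-37, far below any time-of-flight test
```

This is why direct time-of-flight tests cannot probe the photon mass bound; the inverse-square-law tests mentioned above do much better.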

Jul 7, 2021, 4:12:28 AM7/7/21

to

But all accurate laboratory 'speed of light' measurements

were done using --standing waves--.

No propagation timing involved,

Jan

Jul 7, 2021, 4:31:54 AM7/7/21

to

Tom Roberts <tjrobe...@sbcglobal.net> wrote:

> On 6/29/21 1:41 AM, J. J. Lodder wrote:

> > [...] The speed of light cannot 'really' be variable. [...]

>

> You make far too many assumptions to be reasonable.

>

> Certainly the (vacuum) speed of light COULD vary, it's just that in the

> world we inhabit, with current technology, it is observed to not vary

> significantly (when measured using standard clocks and rulers at rest in

> some locally inertial frame).

That's where you are mistaken.
There is no such thing as a god-given 'standard clock'
or 'standard ruler'.

> But it certainly is possible that in the

> future we will develop technology with greatly improved resolution and

> discover that it actually does vary in the world we inhabit.

If variation is found we will have to discover (or decide!)

what it is that varies.

(speed?, rulers?, clocks?, all three?, some 'fundamental' 'constant'?)

> It is also possible we will never find it varies -- science is a JOURNEY,

> not a destination.

This is not a matter that can be settled

by means of naive empiricism,

by just 'measuring' the 'speed of light',

Jan

Jul 7, 2021, 4:32:24 AM7/7/21

to

In the SI as it stands it is impossible in principle

to measure the speed of light,

Jan

Jul 7, 2021, 1:20:20 PM7/7/21

to

Tom Roberts <tjrobe...@sbcglobal.net> wrote:

> On 6/29/21 1:49 PM, Jos Bergervoet wrote:

> > Indeed we can agree that basically this is determined by the metric

> > of space. Any massless field will have a propagation speed defined

> > by the metric, but any measurement of speed also has to use that

> > metric. So the result is fixed.

>

> You mean the metric of spacetime (not space). And this applies only to a

> massless field -- there is no fundamental reason for the photon to be

> massless, it's just that its mass is observed to be consistent with zero

> and an extremely tiny upper limit (< 10^-18 eV).

>

> > This should make clear that a change cannot be observed using the

> > local metric, but not everyone will agree that this means it cannot

> > 'really' change.

>

> The constancy of the vacuum speed of light applies only locally, so

> everyone who understands the issues will agree for a massless field.

> But of course that's the rub -- we don't really know whether the photon

> field is truly massless.

Why this insistence on photons being or not being 'truly' massless?
It is nothing but a red herring.

All troubles that might arise from a non-zero photon mass

are easily killed in advance

by adding 'in the limit of infinite frequency'

to the definition of the speed of light.

Since the photon mass cannot be measured,

even the longest radio waves that we can make

still have an 'infinite' frequency.

> > We know that seen from another point in space, the speed of light

> > can be different if space-time is curved (as it usually is..) You

> > may then claim that it is only an 'apparent' slowing down if e.g.

> > light falls into a black hole, but then we only change the discussion

> > to the meaning of 'apparent' and 'really'. We can't maintain that it

> > is unobservable, in that case.

>

> That's just an argument over the meanings of words. Moreover it's an

> argument that never comes up because the constancy of the vacuum speed

> of light applies only locally.

>

> All this only applies to massless fields, and we don't really know

> whether the photon field is truly massless. Of course we never will....

A photon mass corresponding to a wavelength of the size of the universe

cannot be measured in principle.

Our poor photon has only a few decades left to have mass in,

Jan

Jul 7, 2021, 1:20:27 PM7/7/21

to

In article <1pbx3kk.jywko91341fc9N%nos...@de-ster.demon.nl>,

> Tom Roberts <tjrobe...@sbcglobal.net> wrote:
>
> > On 6/29/21 1:41 AM, J. J. Lodder wrote:
> > > [...] The speed of light cannot 'really' be variable. [...]
> >
> > You make far too many assumptions to be reasonable.
> >
> > Certainly the (vacuum) speed of light COULD vary, it's just that in the
> > world we inhabit, with current technology, it is observed to not vary
> > significantly (when measured using standard clocks and rulers at rest
> > in some locally inertial frame).
>
> That's where you are mistaken.
> There is no such thing as a god-given 'standard clock'
> or 'standard ruler'.
>
> > But it certainly is possible that in the
> > future we will develop technology with greatly improved resolution and
> > discover that it actually does vary in the world we inhabit.
>
> A meaningless statement.
> If variation is found we will have to discover (or decide!)
> what it is that varies.
> (speed?, rulers?, clocks?, all three?, some 'fundamental' 'constant'?)
>
> > It is also possible we will never find it varies -- science is a
> > JOURNEY, not a destination.
>
> Empty ideology.
> This is not a matter that can be settled
> by means of naive empiricism,
> by just 'measuring' the 'speed of light',

One could measure the speed of light via several different types of
rulers and clocks, or by measuring wavelength and frequency, or
whatever, in the lab. It is theoretically possible that the speed of
light could change with time and that we could measure it.

The fact that the speed of light is now a defined quantity does not
somehow magically make it impossible to make a measurement which was
possible with the original SI definitions.

Obviously, if such a change were detected, then it would be a good idea
to change the definition of the metre.

Jul 7, 2021, 1:20:49 PM7/7/21

to

In article <1pbx48j.1xmowyy1fnnxmfN%nos...@de-ster.demon.nl>,

Because, for practical reasons, the metre is now defined as the

distance light travels in a certain time. That is our definition, which

Nature doesn't know about. We cannot magically influence Nature by

changing definitions.

With time, more and more SI units have been defined via fiat values of

constants of Nature. This is a purely practical matter, because we

ASSUME that they do not change with time. The definitions are also

coupled with experiments which are relatively easy to reproduce.

Back when the metre was defined as 1/10,000,000 of the distance from

north pole to equator along the meridian through Paris, that did not

somehow make it impossible to measure the change in the size of the

Earth with time.


Jul 11, 2021, 5:57:55 PM7/11/21

to

On 7/5/21 6:49 AM, Nicolaas Vroom wrote:

> Op woensdag 30 juni 2021 om 21:37:22 UTC+2 schreef Tom Roberts:
>> [...] there is no fundamental reason for the photon to be
>> massless, it's just that its mass is observed to be consistent with
>> zero and an extremely tiny upper limit (< 10^-18 eV).
>
> How is this mass observed? Or should I write upper limit?

Various methods are used. Go to http://pdg.lbl.gov, and look up the
photon in the listings. Their data cards give many references.

> Is it possible to measure this photon field?

> Is it not true, that when it is possible to measure the energy of a

> light pulse, that then individual photons also have energy,

> and as a consequence individual photons also have a mass?

> This implies when a star emits light it also emits mass.

It is true that when a star emits light, the star's mass decreases. Indeed the

famous equation E = mc^2 relates the emitted light's energy to the

star's mass decrease.

>> the constancy of the vacuum speed of light applies only locally.

>

> Does that implies that globally, considering a light pulse

> (explosion) emitted over a long distance, that its speed is not

> constant?

This depends on what you mean by "speed".

Physicists normally use "speed" to mean a local measurement involving

standard clocks and rulers at rest in a locally inertial frame. With

that meaning the vacuum speed of light does not vary in our best

theories of the world we inhabit, and this is solidly supported

experimentally.

Over long distances in a non-flat manifold (i.e. gravitation is

important), measuring the "speed" of light is ambiguous -- depending on

one's choice of coordinates one can obtain just about any value (which

makes such "measurements" useless).

> Is this text from Wikipedia true?: "Photons are massless,[a] so they

> always move at the speed of light in vacuum, 299792458 m/s (or about

> 186,282 mi/s).

> [a] The photon's invariant mass (also called "rest mass" for massive

> particles) is believed to be exactly zero. This is the notion of

> particle mass generally used by modern physicists. The photon does

> have a nonzero relativistic mass, depending on its energy, but this

> varies according to the frame of reference."

The "relativistic mass" mentioned there is actually energy (so of course it is frame dependent).

The energy of an object is the time component of its 4-momentum,

measured in some locally inertial frame; this obviously depends on which

frame is used. The mass of an object is the norm of its 4-momentum; this

is an invariant and can be measured in any frame.

This last corresponds to the ancient meaning of mass as

"how much stuff is present" -- that clearly is intrinsic

to an object, and must therefore be invariant.
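The norm-of-4-momentum statement above is easy to check numerically. This is a small sketch in units with c = 1; the 0.511 MeV electron mass and the chosen momentum are just illustrative inputs.

```python
import math

def invariant_mass(E, px, py, pz):
    """Norm of the 4-momentum, m**2 = E**2 - |p|**2, in units with c = 1."""
    return math.sqrt(E**2 - (px**2 + py**2 + pz**2))

# An electron (m = 0.511 MeV) with momentum 1.0 MeV along x:
E = math.sqrt(0.511**2 + 1.0**2)
print(invariant_mass(E, 1.0, 0.0, 0.0))    # ~0.511, the same in every frame

# For a photon E = |p|, so the invariant mass is exactly zero:
print(invariant_mass(2.0, 2.0, 0.0, 0.0))  # 0.0
```

A Lorentz boost changes E and p separately, but not this combination, which is the sense in which mass is intrinsic to the object.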

> Why can the speed of light not be different in different places in

> the universe

It could be, but there is no reason that it should. Our current best model of the universe has the

local vacuum speed of light the same everywhere.

> The time unit is the most tricky if the method to measure time

> involves light signals and when you want to use time to measure the

> speed of light. This looks like circular reasoning.

No light signals are involved in defining the unit of time. The second is defined as the duration of 9,192,631,770

cycles of the hyperfine ground-state transition of Cs-133.

> Exactly what is determined by the metric of space?

Distances between points in space. But as I said it is the metric of

spacetime that is important here; it determines distances between points

in spacetime. One must of course account for the difference in

measurement units for space and time (if there is a difference).

> This raises also the question how is this metric measured.

Tom Roberts

Jul 12, 2021, 7:43:38 AM7/12/21

to

Op maandag 5 juli 2021 om 20:45:32 UTC+2 schreef Phillip Helbig:

> In article <a5210321-687e-464d...@googlegroups.com>,
> Nicolaas Vroom <nicolaa...@pandora.be> writes:
>
> > Op woensdag 30 juni 2021 om 21:37:22 UTC+2 schreef Tom Roberts:
> > > That's just an argument over the meanings of words. Moreover it's an
> > > argument that never comes up because the constancy of the vacuum speed
> > > of light applies only locally.
> >
> > Does that implies that globally, considering a light pulse (explosion)
> > emitted over a long distance, that its speed is not constant?
>
> Look up "Shapiro delay".

First I found this link:
https://en.wikipedia.org/wiki/Shapiro_time_delay#Further_reading
Specifically reference 2, the book by Ray d'Inverno.
From this book I studied section 15.6.
To read my comments:
http://users.telenet.be/nicvroom/Book_Review_Introducing_Einstein's_Relativity.htm#Par%2015.6
I also wrote reflections #3 and #4 related to the Shapiro time delay.
See:
http://users.telenet.be/nicvroom/Book_Review_Introducing_Einstein's_Relativity.htm#ref3

My overall comment, based on reading the book, is that the Shapiro time
delay does not say anything specific about the speed of light.
In particular, IMO fig. 15.13 in the book seems to be wrong:
the light ray is not bent.
My impression is that the speed of light is not a local issue,
specifically if you want to measure the distance to a planet around
a star outside our solar system.
The reflections #3 and #4 are important reading.
In particular, my understanding is that the light rays are bent twice.
This makes the 'Shapiro time delay' a very complex physical process.

A different experiment studied is the reflection of radio waves against
the Heaviside or E layer in the ionosphere.
My comments you can read here:
http://users.telenet.be/nicvroom/wik_Kennelly-Heaviside.htm
Specifically study the reflections #1 and #2.
I performed this experiment myself when I was at university.
This experiment raises certain physical questions centred on the issue:
exactly what is measured.

The most important lessons for every experiment are:
1) Describe the experiment in as much detail as possible.
2) Describe all the physical results of the experiment in as much detail as possible.
3) A mathematical investigation of the results is of minor importance.
Part of the problem is that a mathematical theory often depends on
(physical) parameters like mass and the speed of light. These parameters also
have to be established by means of experiments (or observations). This
makes the descriptions of the experiments even more important.

Nicolaas Vroom.
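[For readers who want a number to attach to the Shapiro delay being
discussed, here is a sketch. It is not from the post: it assumes the
standard weak-field formula dt = (2GM/c^3) ln(4 r1 r2 / b^2) and rough
circular orbital radii for Earth and Mars, and estimates the excess
one-way delay for a radar signal grazing the Sun.]

```python
# Sketch (assumed weak-field Shapiro formula and nominal orbital radii):
# excess one-way delay for a radar signal passing just beside the Sun.
import math

G = 6.674e-11            # m^3 kg^-1 s^-2, gravitational constant
M_SUN = 1.989e30         # kg
C = 299_792_458          # m/s
R_SUN = 6.96e8           # m, impact parameter for a grazing ray
R_EARTH = 1.496e11       # m, Earth orbital radius (assumed)
R_MARS = 2.28e11         # m, Mars orbital radius (assumed)

# dt = (2GM/c^3) * ln(4 r1 r2 / b^2), the weak-field logarithmic delay
one_way = (2 * G * M_SUN / C**3) * math.log(4 * R_EARTH * R_MARS / R_SUN**2)
print(f"one-way excess delay: {one_way*1e6:.0f} microseconds")
# ~124 microseconds one way; a round trip roughly doubles it,
# consistent with the ~250 us measured in radar-ranging experiments.
```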

Jul 12, 2021, 1:18:57 PM7/12/21

to

Op woensdag 7 juli 2021 om 19:20:49 UTC+2 schreef Phillip Helbig:

> In article <1pbx48j.1xmowyy1fnnxmfN%nos...@de-ster.demon.nl>,
> nos...@de-ster.demon.nl (J. J. Lodder) writes:
> > Thomas Koenig <tko...@netcologne.de> wrote:
> > > The way that the SI units are defined now, the speed of light
> > > in vacuum is indeed constant. If you measure anything else than
> > > 299792458 m/s, recalibrate your measurement devices.
> >
> > Nonsense.
> > In the SI as it stands it is impossible in principle
> > to measure the speed of light,
>
> Because, for practical reasons, the metre is now defined as the
> distance light travels in a certain time. That is our definition,

You can do that, but now you create a new issue:
how is this CERTAIN TIME defined and, more importantly, measured in
detail in practice?
That is a very important issue, because we can all measure the same
time, but when we compare all the distances measured
(which should be identical), they are not.
That means that at most one person measures the distance of 299792458
metres correctly, assuming we all measure 1 second.
It is the same as the above ambiguous advice:
"YOU should recalibrate your measurement device."
But if my measurement is also different from all of the others,
by how much should I adapt my time-measurement device?
The above issue about CERTAIN TIME becomes even more important
if you want to measure the speed of a cosmic ray (etc.).

Nicolaas Vroom

[[Mod. note -- An old nautical saying is "never go to sea with two
chronometers; always take one or three". In this context, that means
that people doing precision timing & clock development often use an
ensemble of co-located clocks (typically 5-10 are used), all of similar
construction and method-of-operation, so that they can inter-compare the
clocks. Since all the clocks in the ensemble are co-located, they should
all record the same elapsed-time readings; more accurately, any differences
in their elapsed-time readings can be ascribed to clock drifts (errors).
Inter-comparing the clocks can thus give a statistical estimate of the
clocks' accuracy (quantified by "Allan variance" -- see the Wikipedia
article of that name if you want details). If any clock is an outlier
in the ensemble, it's flagged as not-working-properly (a.k.a. "broken").

For example, if I have 6 co-located clocks A,B,C,D,E,F which were
initially all synchronized, and after one mean solar day they read
A: 86400.00000124
B: 86400.00000382
C: 86400.00000226
D: 86400.00000275
E: 86400.00229071
F: 86400.00000390
seconds, then I can reasonably say that
(a) clock E is a clear outlier and is not working properly
(b) on average, clocks A,B,C,D,F run fast by about 3 +/- 1 microseconds/day
-- jt]]
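[The moderator's ensemble procedure can be sketched in a few lines.
This is an illustrative reconstruction, not anything from the thread:
it takes the six example readings above, flags the outlier with a
median-absolute-deviation test, and estimates the mean drift and
scatter of the remaining clocks.]

```python
# Sketch: outlier flagging and drift estimation for a clock ensemble,
# using the moderator's example readings (elapsed seconds after one
# mean solar day). The 10*MAD threshold is an assumed, illustrative cut.
from statistics import mean, median, stdev

readings = {
    "A": 86400.00000124,
    "B": 86400.00000382,
    "C": 86400.00000226,
    "D": 86400.00000275,
    "E": 86400.00229071,
    "F": 86400.00000390,
}

NOMINAL = 86400.0
drifts = {k: v - NOMINAL for k, v in readings.items()}

# Robust outlier test: deviation from the median drift, in units of the
# median absolute deviation (MAD).
med = median(drifts.values())
mad = median(abs(d - med) for d in drifts.values())
outliers = {k for k, d in drifts.items() if abs(d - med) > 10 * mad}

good = [d for k, d in drifts.items() if k not in outliers]
print("outliers:", outliers)                          # {'E'}
print("mean drift of good clocks: %.2g s/day" % mean(good))  # ~3 microseconds
print("scatter: %.1g s/day" % stdev(good))                   # ~1 microsecond
```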

Jul 13, 2021, 4:48:35 AM7/13/21

to

On 7/7/21 3:31 AM, J. J. Lodder wrote:
> Tom Roberts <tjrobe...@sbcglobal.net> wrote:
>> Certainly the (vacuum) speed of light COULD vary, it's just that in the
>> world we inhabit, with current technology, it is observed to not vary
>> significantly (when measured using standard clocks and rulers at rest in
>> some locally inertial frame).
>
> There is no such thing as a god-given 'standard clock'
> or 'standard ruler'.

Of course. Such standards are determined by humans. Organizations such
as ISO have been created to agree upon such standards and publish them.
It OUGHT to be obvious that a standard clock measures its elapsed proper
time using standard seconds, and a standard ruler measures distance
using some standard of length, such as meters.

>> But it certainly is possible that in the
>> future we will develop technology with greatly improved resolution and
>> discover that it actually does vary in the world we inhabit.
>
> A meaningless statement.
> If variation is found we will have to discover (or decide!)
> what it is that varies.
> (speed?, rulers?, clocks?, all three?, some 'fundamental' 'constant'?)

My statement is not meaningless: if the speed of light is measured to
vary, then it is certainly varying -- DUH!
Whether something else is also varying is a different question; to date
no significant variation has been found in any of the things you
mention. Such measurements have excellent accuracy, 9 or more
significant digits.

> This is not a matter that can be settled
> by means of naive empiricism,
> by just 'measuring' the 'speed of light',

How else could one detect a variation in the speed of light????

Tom Roberts

Jul 13, 2021, 4:48:35 AM7/13/21

to

>> The constancy of the vacuum speed of light applies only locally, so
>> everyone who understands the issues will agree for a massless field.
>> But of course that's the rub -- we don't really know whether the photon
>> field is truly massless.
>
> Why this insistence on photons being or not being 'truly' massless?

Because it is an important aspect of whether the (vacuum) speed of light
varies. In our current best models of the world, nonzero photon mass and
a varying vacuum speed of light are equivalent (either both are valid or
both are invalid).

> All troubles that might arise from a non-zero photon mass
> are easily killed in advance
> by adding 'in the limit of infinite frequence'
> to the definition of the speed of light.

But then one could never actually measure the speed of light, one could
only measure it approximately -- hopeless for such a fundamental aspect
of the world we inhabit, and one used in so much technology.

The correct way to deal with a nonzero photon mass is to distinguish
between the two quite different meanings of c:
1) the vacuum speed of light
2) the symmetry speed of Lorentzian manifolds
If (1) is found to vary, no fundamental revolution in physics is
involved, we just start using a nonzero photon mass [#]. If (2) is found
to vary, it would refute every theory of physics we have today.

[Historically, in 1905 this distinction was not known
and Einstein intermixed them inappropriately. His
second postulate is actually about (2), not (1).
Today we consider SR to be a theory of geometry,
not electrodynamics (the subject of his 1905 paper).]

[#] See Proca theory.

> Since the photon mass cannot be measured,
> even the longest radio waves that we can make
> still have an 'infinite' frequency.

a) The photon mass has been measured many times; at present the
best measurements are consistent with zero and an upper limit
of 10^-18 eV.
b) EM waves with frequencies from kilohertz to terahertz have
been measured -- NONE are "infinite".

> A photon mass corresponding to a wavelength of the size of the universe
> cannot be measured in principle.

Tom Roberts
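[To put a scale on this: the sketch below is an illustration, not from
the post. It combines the quoted 10^-18 eV photon-mass upper limit with
the massive-particle relation v/c = sqrt(1 - (mc^2/E)^2) (the Proca
dispersion in the high-energy limit) to show how little even a
kilohertz radio wave would lag behind c.]

```python
# Sketch (assumed inputs: the 1e-18 eV photon-mass upper limit quoted
# above, and the relativistic relation v/c = sqrt(1 - (m c^2 / E)^2)).
import math

H_EV = 4.135667696e-15     # Planck constant in eV/Hz
M_PHOTON_EV = 1e-18        # assumed upper limit on photon mass-energy, eV

def fractional_slowdown(freq_hz: float) -> float:
    """Return 1 - v/c for a photon of the given frequency."""
    e_photon = H_EV * freq_hz          # photon energy in eV
    ratio = M_PHOTON_EV / e_photon
    return 1 - math.sqrt(1 - ratio**2)

# Even for a 1 kHz radio wave the speed deficit is ~3e-14 of c --
# far below what any current time-of-flight measurement can resolve.
print(f"1 - v/c at 1 kHz: {fractional_slowdown(1e3):.1e}")
```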

Jul 13, 2021, 4:48:35 AM7/13/21

to

On 7/5/21 1:45 PM, Phillip Helbig (undress to reply) wrote:
> In article <a5210321-687e-464d...@googlegroups.com>,
> Nicolaas Vroom <nicolaa...@pandora.be> writes:
>> individual photons also have energy, and as a consequence
>> individual photons also have a mass?
>
> E = mc^2 so in that sense photons have mass...

That is not mass in any sense, that is ENERGY.

>> This implies when a star emits light it also emits mass.
>
> ...and as a result the mass of a star decreases when it emits light.

When a star emits light with total energy E, the star's mass decreases
by m, with E=mc^2.
This is what that equation actually means, not your misinterpretation above.

> The question is whether the rest mass is zero.

Mass is the Lorentz-invariant norm of an
object's 4-momentum. In the object's rest frame that corresponds to its
energy [#]. The "c" in E=mc^2 is just a units conversion factor;
physicists often use units with c=1 and omit "c" from equations. So, for
instance, the PDG now lists particle masses in eV, not the older eV/c^2.

[#] This of course does not apply to light, which has no
rest frame. "Rest mass" could never apply to light; mass
does, and is zero.

Tom Roberts
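[The point that a star's mass decreases by m = E/c^2 when it radiates
energy E can be made concrete. A minimal sketch; the solar luminosity
is an assumed nominal value, not a figure from the thread.]

```python
# Sketch: mass a star loses each second by radiating light, m = E/c^2.
# L_SUN is the assumed nominal solar luminosity (IAU 2015 value).
C = 299_792_458          # m/s, speed of light
L_SUN = 3.828e26         # W, energy radiated by the Sun per second

mass_loss_per_s = L_SUN / C**2
print(f"the Sun's mass decreases by ~{mass_loss_per_s:.2e} kg each second")
# ~4.26e+09 kg/s -- millions of tonnes per second, yet utterly
# negligible against the Sun's ~2e30 kg total mass.
```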

Jul 31, 2021, 10:41:25 AM7/31/21

to

Op maandag 12 juli 2021 om 19:18:57 UTC+2 schreef Nicolaas Vroom:

> [[Mod. note -- An old nautical saying is "never go to sea with two
> chronometers; always take one or three".

In process control the best way to implement redundancy is:
triple redundancy.

> In this context, that means
> that people doing precision timing & clock development often use an
> ensemble of co-located clocks (typically 5-10 are used), all of similar
> construction and method-of-operation, so that they can inter-compare the
> clocks. Since all the clocks in the ensemble are co-located, they should
> all record the same elapsed-time readings; more accurately, any differences
> in their elapsed-time readings can be ascribed to clock drifts (errors).
> Inter-comparing the clocks can thus give a statistical estimate of the
> clocks' accuracy. If any clock is an outlier
> in the ensemble, it's flagged as not-working-properly (a.k.a "broken").

I fully agree with you. But the experiment we are discussing here
is slightly different.
Starting point is the text by Tom Roberts:

> Your premise is wrong -- we do not use the speed of light to determine
> the unit of time. The second is defined as the duration of 9,192,631,770
> cycles of the hyperfine ground-state transition of Cs-133.

That means first we use this clock to measure 1 second.
This defines two moments, t1 (start pulse) and t2 (stop pulse).
Secondly we use these two events: transmit a light ray at t1 at position p1
and mark the position p2 (along a rod) of the light ray at t2,
as described by Phillip Helbig.
The length between p1 and p2 defines the standard distance of 299792458 m.
The problem is that it is very difficult to establish the position p2 of the
light ray at t2.
What you need is a clock at p2; when that clock reaches 9,192,631,770
cycles, the light signal from p1 should arrive simultaneously.
IMO this does not work, because exactly where should you place this clock?
I hope that someone comes up with a better, detailed, standard description
of how to perform this experiment accurately, such that others can perform
the same.

Using such a standard experiment we can now perform many experiments:
1) at the same location, and observe if all the distances are the same.
2) at different locations, and observe if all the distances are the same.
In case 1, in principle, all people should measure the same distance, but
I expect they will not. The major reason is inaccuracy inherent in the
experiment.
In case 2, all people don't have to measure the same distance. The major
reason is that the speed of light is not the same in all directions,
caused by gravity considerations.

Nicolaas Vroom
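[For orientation on the numbers involved in the experiment described
above, here is a small illustrative sketch relating the two exact SI
defining constants: the Cs-133 hyperfine frequency that defines the
second and the fixed speed of light that defines the metre.]

```python
# Sketch: both constants below are exact by definition in the SI;
# the derived figures show the timing resolution the experiment needs.
C = 299_792_458          # m/s, exact: defines the metre
CS_HZ = 9_192_631_770    # Hz, exact: Cs-133 hyperfine frequency defines the second

metre_time = 1 / C              # light travel time over one metre
cycles_per_metre = CS_HZ / C    # Cs cycles elapsed while light crosses 1 m

print(f"light needs {metre_time*1e9:.4f} ns per metre")   # 3.3356 ns
print(f"that is {cycles_per_metre:.3f} Cs hyperfine cycles")  # 30.663
# So locating p2 to within 1 m means timing to ~31 Cs cycles;
# locating it to 1 mm means resolving a small fraction of one cycle.
```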

Aug 3, 2021, 3:12:03 AM8/3/21

to

Phillip Helbig (undress to reply) <hel...@asclothestro.multivax.de>

wrote:

> In article <1pbx48j.1xmowyy1fnnxmfN%nos...@de-ster.demon.nl>,

> nos...@de-ster.demon.nl (J. J. Lodder) writes:

>

> > Thomas Koenig <tko...@netcologne.de> wrote:

> >

> > > Nicolaas Vroom <nicolaa...@pandora.be> schrieb:

> > >

> > > > Ofcourse you could claim that the speed of light is constant.

> > >

> > > The way that the SI units are defined now, the speed of light

> > > in vacuum is indeed constant. If you measure anything else than

> > > 299792458 m/s, recalibrate your measurement devices.

> >

> > Nonsense.

> > In the SI as it stands it is impossible in principle

> > to measure the speed of light,

>

> Because, for practical reasons, the metre is now defined as the

> distance light travels in a certain time. That is our definition, which

> Nature doesn't know about. We cannot magically influence Nature by

> changing definitions.

Nature doesn't know about our definitions,
but we had better choose our definitions
in ways that agree with what nature is like.
To the best of our knowledge we live in a spacetime
that is characterised by an absolute velocity,
confusingly also called c.
If this is indeed the case we have to choose our length and time
units in accordance with this fundamental fact
(like the SI already does).
If physical things (such as G or alpha, for example) are indeed variable,
we would otherwise obtain unphysical results,
such as deluding ourselves into a belief that c could be variable.

> With time, more and more SI units have been defined via fiat values of
> constants of Nature. This is a purely practical matter, because we
> ASSUME that they do not change with time. The definitions are also
> coupled with experiments which are relatively easy to reproduce.
>
> Back when the metre was defined as 1/10,000,000 of the distance from
> north pole to equator along the meridian through Paris, that did not
> somehow make it impossible to measure the change in the size of the
> Earth with time.

This meridian-of-the-earth thing
was never more than a propaganda device.
For metrological reasons, anno 1800 a length standard
could only be two scratches on a metal bar.
A reason had to be invented to declare a particular pair of scratches
on a particular bar better than all others (to break all local chauvinisms).
As a matter of historical fact the earth was never remeasured
to obtain a more accurate meter,

Jan


Aug 3, 2021, 3:12:03 AM8/3/21

to

> In article <1pbx3kk.jywko91341fc9N%nos...@de-ster.demon.nl>,
> nos...@de-ster.demon.nl (J. J. Lodder) writes:
>
> > Tom Roberts <tjrobe...@sbcglobal.net> wrote:
> >
> > > On 6/29/21 1:41 AM, J. J. Lodder wrote:
> > > > [...] The speed of light cannot 'really' be variable. [...]
> > >
> > > You make far too many assumptions to be reasonable.
> > >
> > > Certainly the (vacuum) speed of light COULD vary, it's just that in the
> > > world we inhabit, with current technology, it is observed to not vary
> > > significantly (when measured using standard clocks and rulers at rest in
> > > some locally inertial frame).
> >
> > That's where you are mistaken.
> > There is no such thing as a god-given 'standard clock'
> > or 'standard ruler'.
> >
> > > But it certainly is possible that in the
> > > future we will develop technology with greatly improved resolution and
> > > discover that it actually does vary in the world we inhabit.
> >
> > A meaningless statement.
> > If variation is found we will have to discover (or decide!)
> > what it is that varies.
> > (speed?, rulers?, clocks?, all three?, some 'fundamental' 'constant'?)
> >
> > > It is also possible we will never find it varies -- science is a JOURNEY,
> > > not a destination.
> >
> > Empty ideology.
> >
> > This is not a matter that can be settled
> > by means of naive empiricism,
> > by just 'measuring' the 'speed of light',
>
> One could measure the speed of light via several different types of
> rulers and clocks, or by measuring wavelength and frequency, or
> whatever, in the lab. It is theoretically possible that the speed of
> light could change with time and that we could measure it.

That is utterly and thoroughly wrong.
You can potter about with all kinds of measuring equipment,
and you might see that things vary.
That you have measured the speed of light to be varying
must be a theoretical assumption.
Your units could be changing instead
(as a result of something else changing, alpha for example).

> The fact that the speed of light is now a defined quantity does not
> somehow magically make it impossible to make a measurement which was
> possible with the original SI definitions.

You can still do exactly the same measurements
(and in fact these are done routinely).
Only the interpretation has changed.
What used to be called 'a speed of light measurement'
is nowadays called 'the calibration of a (secondary) meter standard'.

> Obviously, if such a change were detected, then it would be a good idea
> to change the definition of the metre.

You would need a lot of much better ideas than that,
such as reinventing spacetime,

Jan

Aug 6, 2021, 11:09:03 PM8/6/21

to

Tom Roberts <tjrobe...@sbcglobal.net> wrote:

> On 7/7/21 3:31 AM, J. J. Lodder wrote:

> > Tom Roberts <tjrobe...@sbcglobal.net> wrote:

> >> Certainly the (vacuum) speed of light COULD vary, it's just that in the

> >> world we inhabit, with current technology, it is observed to not vary

> >> significantly (when measured using standard clocks and rulers at rest in

> >> some locally inertial frame).

> >

> > There is no such thing as a god-given 'standard clock'
> > or 'standard ruler'.