On Fri, Nov 15, 2013 at 8:11 AM, <
nicolas...@gmail.com> wrote:
> In C, using a pointer to type A to write data, and reading the same data
> with a pointer to type B was also considered "safe" for decades.
> But one day, compilers made more aggressive optimizations and such
> idioms stopped working (the strict-aliasing problem).
> Indeed, this broke a lot of programs, even the Linux kernel. The answer
> was: such a thing has never been explicitly allowed by the C specs.
That may well be the perspective of a C programmer, but I don't think
it's an accurate representation of how C developed. During the
initial C standardization process, this issue was discussed. At that
time, the standard writers decided that they would not require that
data written by a pointer to type A be readable using a pointer to
type B. This is explicitly spelled out in the ANSI C89 standard aka
the ISO C90 standard, and similar wording has carried forward into
later C and C++ standards. I believe this decision was made based on
experience with Fortran compilers.
Over time C compiler implementors modified their compilers to follow
the new standard more closely. They also strengthened their
optimizations. This increasingly caused programs to fail due to type
aliasing. At that time programmers complained, but the answer was not
"such a thing has never been explicitly allowed by the C specs."  The
answer was "the C language standard specifically permits compilers to
make this optimization."  And, of course, most production compilers
provided options to disable these optimizations.
None of this has anything to do with Go, which is a different
language with a different philosophy.
Ian