A quick check with

perl -MDevel::Peek -we "print Dump qq/\xa0/, Dump qq/\x{a0}/"

shows that "\xA0" and "\x{A0}" are both interpreted as non-Unicode.
However, this is not the case for perl 5.6.x, whose behaviour is exactly
as I described -- \x{A0} produces a Unicode string and \xA0 an ordinary
character -- and that is what led to my confusion.
I know that 5.6.x has weak Unicode support, but that kind of parsing seems
quite reasonable to me.
Does anyone know why this changed, and why it is a bad idea to treat a string
as Unicode once it contains *any* \x{...} chars, rather than only those above
0x100, as the documentation currently says?
Honestly, I thought of this \x{...} parsing as quite stable behaviour and
could even have relied on it in my programs.
Thanks in advance,
Vadim.
To me, when not using it to incorporate otherwise inaccessible characters,
the "\x{A0}" mechanism is useful primarily for disambiguation.
I use it regularly when I think it can contribute to clearer code, and in
such use it is directly analogous to the use of braces for disambiguation
of variables, as in "the ${word}s", or "${scalar}[0]".
Hugo