On Dec 6, 6:31 pm, Tony Arcieri <tony.arci...
> > The erlang way (eep-0010, eep-0011) seems to be:
> > * binaries which contain text can contain utf-8 or latin1. (latin1,
> > being a fixed single-byte encoding, is effectively just binary)
> > * when the operation needs to know which, you say explicitly at the
> > point of operation: e.g. "~s" vs "~ts" for format string, "/utf8"
> > modifier for matching characters in binaries, "unicode" flag to re for
> > regexp matching.
> I'd rather that be automatic, with strings "knowing" they're UTF-8.
I wouldn't, as it leads to all sorts of complexities:
* How do strings get assigned their initial encoding? (String
literals, strings read from stdin, strings read from files, strings
from the O/S like directory listings, ...)
* What happens if a string with encoding A is concatenated with a string
of encoding B? What about with the empty string? In general, for every
method which takes string arguments, how is its behaviour modified
depending on the encoding of those strings?
* For every method which *returns* a string, how is the encoding of
that string chosen?
* Repeat for all system libraries and all third-party libraries
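The concatenation question above is not hypothetical. A minimal sketch in Python (used here purely for illustration, since it makes encodings easy to demonstrate): if text is carried as raw bytes with the encoding tracked out of band, concatenating two byte strings that happen to use different encodings succeeds silently and produces data that is valid in neither encoding.

```python
# The same text, "café", held as bytes in two different encodings.
utf8_bytes = "café".encode("utf-8")      # b'caf\xc3\xa9'
latin1_bytes = "café".encode("latin-1")  # b'caf\xe9'

# Concatenation raises no error, but the result is not valid UTF-8,
# and under Latin-1 it decodes as mojibake. The corruption is only
# discovered later, if at all.
mixed = utf8_bytes + latin1_bytes
try:
    mixed.decode("utf-8")
except UnicodeDecodeError as e:
    print("not valid UTF-8:", e)
print(mixed.decode("latin-1"))  # 'cafÃ©café' -- silent mojibake
```

Whatever rule the language picks here (error, coerce to one encoding, pick the left operand's encoding) has to be specified and then honoured consistently by every string-producing method in every library.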
Even if Reia has only a one-bit flag, or two classes (utf8/binary),
these issues all still arise. And yet at the point of interacting with
the operating system, for example when calling read() or readdir(),
strings are just binary data. So all this hidden state has to be
synthesised when data arrives, and thrown away when data leaves.
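The boundary problem can be seen concretely in Python 3, whose str type does "know" it is Unicode: when bytes arriving from the OS are not valid UTF-8 (a Latin-1 filename, say), the runtime has to synthesise string state on the way in, via the "surrogateescape" error handler that os.fsdecode uses, and throw it away again on the way out.

```python
# A filename as it might arrive from readdir() on a Latin-1 system:
# raw bytes, not valid UTF-8.
raw = b"caf\xe9.txt"

# To make a str out of arbitrary bytes, the runtime synthesises
# state: each undecodable byte becomes a lone surrogate code point
# (the "surrogateescape" error handler, as used by os.fsdecode).
name = raw.decode("utf-8", "surrogateescape")
print(repr(name))  # 'caf\udce9.txt' -- a str, but not really text

# On the way back out to the OS, the synthetic state is thrown away
# and the original bytes are reconstructed exactly.
assert name.encode("utf-8", "surrogateescape") == raw
```

The round trip preserves the bytes, but the intermediate str is a fiction: it cannot be printed to a strict UTF-8 stream, and nothing about it records which encoding the bytes were actually in.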