What encoding do you want to use? There is support for UTF-8, UTF-16, and UTF-32. If you want to manipulate the text as a string you'll have to convert it to UTF-8, but if you want to keep the original encoding you can just use []byte. Most of the functions in the strings package also have an equivalent in the bytes package.
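For example, here's a minimal sketch of turning raw UTF-16LE bytes into a UTF-8 Go string with the standard unicode/utf16 package; the sample bytes and the little-endian byte order are assumptions for the example:

package main

import (
	"encoding/binary"
	"fmt"
	"unicode/utf16"
)

func main() {
	// Raw UTF-16LE bytes for "héllo" (hypothetical input).
	raw := []byte{0x68, 0x00, 0xe9, 0x00, 0x6c, 0x00, 0x6c, 0x00, 0x6f, 0x00}

	// Reassemble the little-endian 16-bit code units.
	units := make([]uint16, 0, len(raw)/2)
	for i := 0; i+1 < len(raw); i += 2 {
		units = append(units, binary.LittleEndian.Uint16(raw[i:i+2]))
	}

	// utf16.Decode yields runes; string() re-encodes them as UTF-8.
	s := string(utf16.Decode(units))
	fmt.Println(s, len(s)) // "héllo", 6 bytes of UTF-8
}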
>Similarly, I have to do my own translation for writing out to
> a file or network socket. Have I missed something builtin to Go or a
> community package that would make this simpler?
Encoding and decoding streams is usually pretty easy. Encoding packages tend to provide wrappers that satisfy the io.Reader and io.Writer interfaces, so encoding while writing out to a file is as simple as a call to io.Copy().
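As one illustration, here's a minimal sketch of that pattern using the golang.org/x/text/encoding/unicode package; the file names and the choice of UTF-16LE output are assumptions for the example:

package main

import (
	"io"
	"log"
	"os"

	"golang.org/x/text/encoding/unicode"
)

func main() {
	in, err := os.Open("input-utf8.txt") // hypothetical UTF-8 source
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	out, err := os.Create("output-utf16.txt") // hypothetical destination
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// Wrap the destination so everything written to it is encoded as
	// UTF-16LE with a BOM; io.Copy does the rest.
	enc := unicode.UTF16(unicode.LittleEndian, unicode.UseBOM).NewEncoder()
	if _, err := io.Copy(enc.Writer(out), in); err != nil {
		log.Fatal(err)
	}
}

Decoding on the way in works the same way with a Decoder's Reader() wrapping the source.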
> Second, assuming my observation is correct, my followup is:
>
> Is this an intentional omission or just a case of no one has needed to
> implement it yet?
Since the encoding of text being read by a Go program has nothing to do with the standard library or the Go language, I can't imagine why any encoding would be intentionally omitted. The omission is most likely due to a lack of need: UTF-8 does a fine job and is widely used.
--
=====================
http://jessta.id.au
You'd be wrong in Java too, since the length of a Java string is the number of UTF-16 code units, not the number of characters. You'd just be wrong less often.
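For comparison, a minimal sketch of the three different "lengths" you can get for the same text in Go; the sample string is an assumption for the example:

package main

import (
	"fmt"
	"unicode/utf16"
	"unicode/utf8"
)

func main() {
	s := "héllo 🙂"

	fmt.Println(len(s))                       // UTF-8 bytes: 11
	fmt.Println(len(utf16.Encode([]rune(s)))) // UTF-16 code units: 8 (what Java's length() counts)
	fmt.Println(utf8.RuneCountInString(s))    // code points / runes: 7
}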
Eoghan