
The need for Unicode types in C++0x


Ioannis Vranos

Oct 1, 2008, 5:59:23 AM
Hi, I am currently learning Qt, a portable C++ framework which comes
with both commercial and GPL licenses, and which provides conversion
operations between its various types and standard C++ types.

For example its QString type provides a toWString() that returns a
std::wstring with its Unicode contents.

So, since wstring supports the largest character set, why do we need
explicit Unicode types in C++?

I think what is needed is a "unicode" locale or, at most, some
unicode locales.


I don't consider being compatible with C99 as an excuse.

Ioannis Vranos

Oct 1, 2008, 6:03:07 AM
Correction:


Ioannis Vranos wrote:
> Hi, I am currently learning QT, a portable C++ framework which comes
> with both a commercial and GPL license, and which provides conversion
> operations to its various types to/from standard C++ types.
>

==> For example its QString type provides a toStdWString() that returns a

Ioannis Vranos

Oct 1, 2008, 12:57:27 PM
REH wrote:
> On Oct 1, 5:59 am, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
> If I understand what you are asking...
>
> wstring in the standard defines neither the character set, nor the
> encoding. Given that Unicode is currently a 21-bit standard, how can
> wstring support the largest character set on a system where wchar_t is
> 16-bits (assuming a one-character-per-element encoding)? You could
> only support the BMP (which is exactly what most systems and language
> that "claim" Unicode support are really capable of).


I do not know much about encodings, only the stuff necessary for me, but
the question does not sound reasonable to me.

If that system supports Unicode as a system-specific type, why can't
wchar_t be made wide enough as that system-specific Unicode type, in
that system?

Erik Wikström

Oct 1, 2008, 1:30:28 PM

Because it has been too narrow for 5 to 10 years and the compiler vendor
does not want to take any chances with backward compatibility; and since
we will get Unicode types, it is a good idea to use wchar_t for encodings
not the same size as the Unicode types.

--
Erik Wikström

Pete Becker

Oct 1, 2008, 1:50:12 PM
On 2008-10-01 12:57:27 -0400, Ioannis Vranos
<ivr...@no.spam.nospamfreemail.gr> said:

>
> If that system supports Unicode as a system-specific type, why can't
> wchar_t be made wide enough as that system-specific Unicode type, in
> that system?

It can be. But the language definition doesn't require it to be, and
with many implementations it's not. So if you want to traffic in
Unicode you have basically three options: ensure that your character
type can handle 21 bits, drop down to a subset of Unicode (as REH
mentioned, the BMP fits in 16 bit code points), or use a variable-width
encoding like UTF-8 or UTF-16.

Or you can wait for C++0x, which will provide char16_t and char32_t.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: A Tutorial and Reference"
(www.petebecker.com/tr1book)

REH

Oct 1, 2008, 12:28:35 PM
On Oct 1, 5:59 am, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:

If I understand what you are asking...

wstring in the standard defines neither the character set, nor the
encoding. Given that Unicode is currently a 21-bit standard, how can
wstring support the largest character set on a system where wchar_t is
16-bits (assuming a one-character-per-element encoding)? You could
only support the BMP (which is exactly what most systems and languages
that "claim" Unicode support are really capable of).

REH

James Kanze

Oct 2, 2008, 3:37:47 AM
On Oct 1, 11:59 am, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:

> Hi, I am currently learning QT, a portable C++ framework which
> comes with both a commercial and GPL license, and which
> provides conversion operations to its various types to/from
> standard C++ types.

> For example its QString type provides a toWString() that
> returns a std::wstring with its Unicode contents.

In what encoding format? And what if the "usual" encoding for
wstring isn't Unicode (as is the case on many Unix platforms)?

> So, since wstring supports the largest character set, why do
> we need explicit Unicode types in C++?

Because wstring doesn't guarantee Unicode, and implementers
can't change what it does guarantee in their particular
implementation.

> I think what is needed is a "unicode" locale or at the most,
> some unicode locales.

Well, to begin with, there are only two sizes of character
types; the various Unicode encoding forms come in three sizes,
so you already have a size mismatch. And since wchar_t already
has a meaning, we can't just arbitrarily change it.

> I don't consider being compatible with C99 as an excuse.

How about being compatible with C++03?

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

James Kanze

Oct 2, 2008, 3:41:03 AM
On Oct 1, 6:28 pm, REH <spamj...@stny.rr.com> wrote:

[...]


> wstring in the standard defines neither the character set, nor the
> encoding. Given that Unicode is currently a 21-bit standard, how can
> wstring support the largest character set on a system where wchar_t is
> 16-bits (assuming a one-character-per-element encoding)? You could
> only support the BMP (which is exactly what most systems and language
> that "claim" Unicode support are really capable of).

No. Most systems that claim Unicode support on 16 bits use
UTF-16. Granted, it's a multi-element encoding, but if you're
doing anything serious, effectively, so is UTF-32. (In
practice, I find that UTF-8 works fine for a lot of things.)

Hendrik Schober

Oct 2, 2008, 6:21:38 AM
James Kanze wrote:
> On Oct 1, 11:59 am, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
> wrote:
>> Hi, I am currently learning QT, a portable C++ framework which
>> comes with both a commercial and GPL license, and which
>> provides conversion operations to its various types to/from
>> standard C++ types.
>
>> For example its QString type provides a toWString() that
>> returns a std::wstring with its Unicode contents.
>
> In what encoding format? And what if the "usual" encoding for
> wstring isn't Unicode (the case on many Unix platforms).

<curious>
What are those implementations using for 'wchar_t'?
</curious>

Schobi

Ioannis Vranos

Oct 2, 2008, 6:26:03 AM
Erik Wikström wrote:
>
> Because it has been to narrow for 5 to 10 years and the compiler vendor
> does not want to take any chances with backward compatibility,


How will it break backward compatibility if the size of wchar_t changes?

> and since
> we will get Unicode types it is a good idea to use wchar_t for encodings
> not the same size as the Unicode types.


I am talking about not needing those Unicode types since we have wchar_t
and locales.

Ioannis Vranos

Oct 2, 2008, 6:34:25 AM
Pete Becker wrote:
> On 2008-10-01 12:57:27 -0400, Ioannis Vranos
> <ivr...@no.spam.nospamfreemail.gr> said:
>
>>
>> If that system supports Unicode as a system-specific type, why can't
>> wchar_t be made wide enough as that system-specific Unicode type, in
>> that system?
>
> It can be. But the language definition doesn't require it to be, and
> with many implementations it's not


C++03 mentions:


"Type wchar_t is a distinct type whose values can represent distinct
codes for all members of the *largest* extended character set specified
among the supported *locales* (22.1.1). Type wchar_t shall have the same
size, signedness, and alignment requirements (3.9) as one of the other
integral types, called its underlying type".

Ioannis Vranos

Oct 2, 2008, 6:39:45 AM
James Kanze wrote:
> On Oct 1, 11:59 am, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
> wrote:
>
>> So, since wstring supports the largest character set, why do
>> we need explicit Unicode types in C++?
>
> Because wstring doesn't guarantee Unicode, and implementers
> can't change what it does guarantee in their particular
> implementation.


Again, if the implementers want Unicode, they can add a Unicode locale
and make wchar_t large enough to match it.


In other words, C++0x could require all implementations to provide
specific Unicode locales that will work with existing facilities
(wchar_t, wstring, etc).

REH

Oct 2, 2008, 8:34:55 AM
On Oct 2, 3:41 am, James Kanze <james.ka...@gmail.com> wrote:
> No.  Most systems that claim Unicode support on 16 bits use
> UTF-16.  Granted, it's a multi-element encoding, but if you're
> doing anything serious, effectively, so is UTF-32.  (In
> practice, I find that UTF-8 works fine for a lot of things.)
>
The ones I am familiar with only support UCS-2, not UTF-16. Windows,
for example, has WCHAR_T which is not UTF-16 (although Windows does
support MBCS, but I am not sure if that is truly UTF-8).

REH

Hendrik Schober

Oct 2, 2008, 9:25:33 AM
REH wrote:
> On Oct 2, 3:41 am, James Kanze <james.ka...@gmail.com> wrote:
>> No. Most systems that claim Unicode support on 16 bits use
>> UTF-16. Granted, it's a multi-element encoding, but if you're
>> doing anything serious, effectively, so is UTF-32. (In
>> practice, I find that UTF-8 works fine for a lot of things.)
>>
> The ones I am familiar with only support UCS-2, not UTF-16. Windows,
> for example, has WCHAR_T which is not UTF-16 [...].

TTBOMK, this isn't true anymore. It's UTF-16 now, not UCS-2.

> REH

Schobi

REH

Oct 2, 2008, 9:43:10 AM

Thanks. I guess I need to update my reference material. I haven't done
Windows programming since the NT days.

REH

James Kanze

Oct 2, 2008, 10:19:37 AM

EUC. EUC (= Extended Unix Codes) is originally a multi-byte
code, but exists as a 32 bit code as well, see
http://docs.sun.com/app/docs/doc/802-1950/6i5us7asn?l=en&a=view.
It's apparently the standard encoding for wchar_t under Solaris
and HP/UX, and perhaps elsewhere as well. Thus, LATIN SMALL
LETTER E WITH ACUTE has the code 0x00E9 in Unicode, but
0x30000069 under Solaris. (``printf( "%04x\n", (unsigned
int)L'é' )'' -- the compiler apparently recognizes my
LC_CTYPE=iso_8859_1 locale for the file input.)

James Kanze

Oct 2, 2008, 10:21:31 AM
On Oct 2, 12:39 pm, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:

> James Kanze wrote:
> > On Oct 1, 11:59 am, Ioannis Vranos
> > <ivra...@no.spam.nospamfreemail.gr> wrote:

> >> So, since wstring supports the largest character set, why
> >> do we need explicit Unicode types in C++?

> > Because wstring doesn't guarantee Unicode, and implementers
> > can't change what it does guarantee in their particular
> > implementation.

> Again, if the implementers want Unicode, they can add a
> Unicode local and make wchar_t size large enough to match it.

And break their existing code base? They're not that
irresponsible (most of them, anyway). And the basic idea behind
wchar_t is that it is supposed to be locale independent, at least
for the encoding.

> In other words, C++0x could require all implementations to
> provide specific Unicode locales that will work with existing
> facilities (wchar_t, wstring, etc).

It could. It would also be ignored by most major implementors
if it did.

Ioannis Vranos

Oct 2, 2008, 10:27:10 AM
James Kanze wrote:
>
> And break their existing code base? They're not that
> irresponsible (most of them, anyway). And the basic idea behind
> wchar_t is that it is suppose to be locale independent, at least
> for the encoding.

How would they "break" their existing code base, by adding some
additional locales and even changing the size of wchar_t?

Yannick Tremblay

Oct 2, 2008, 11:23:43 AM
In article <gc0a5o$2rdr$1...@ulysses.noc.ntua.gr>,

There is no system that supports "Unicode". You should go to:
http://www.unicode.org/standard/WhatIsUnicode.html

Unicode is basically a catalog of glyphs and associated numeric values.
For a computer system, it only makes sense to be precise and talk about
UTF-8, UTF-16 or UTF-32.
http://www.unicode.org/faq/utf_bom.html

A "Unicode" locale makes no sense because the
locale represents much more than simply the character encoding that is
being used.
http://www.unicode.org/reports/tr35/#Locale

Oh, and finally, MS-Windows misused the word Unicode to mean UTF-16
(nowadays; in the past, they meant UCS-2).

Yannick

Ioannis Vranos

Oct 2, 2008, 12:11:50 PM
Yannick Tremblay wrote:
>
>>
>> I do not know much about encodings, only the necessary for me stuff, but
>> the question does not sound reasonable for me.
>>
>> If that system supports Unicode as a system-specific type, why can't
>> wchar_t be made wide enough as that system-specific Unicode type, in
>> that system?
>
> There is no system that support "Unicode". you should go to:
> http://www.unicode.org/standard/WhatIsUnicode.html
>
> Unicode is basically a catalog of glyphs and associated numeric value.
> for a computer system, it only make sense to be precise and talk about
> UTF8, UTF16 or UTF32.
> http://www.unicode.org/faq/utf_bom.html

I agree so far.


> A "Unicode" locale makes no sense because the
> locale represent much more than simply the character encoding that is
> being used.
> http://www.unicode.org/reports/tr35/#Locale


True, but I think Unicode locales could be implemented for characters
only, leaving the rest unchanged (as they are).


For example:


locale::global(locale("english"));


wcin.imbue(locale("UTF16"));
wcout.imbue(locale("UTF16"));


would change only the character set, keeping the rest of the locale
settings as they are, whether they were previously defined or are the
default ones.

Pete Becker

Oct 2, 2008, 12:36:52 PM
On 2008-10-02 06:34:25 -0400, Ioannis Vranos
<ivr...@no.spam.nospamfreemail.gr> said:

There's nothing there that requires wchar_t to be large enough to hold
Unicode code points. Certainly if an implementation supports a Unicode
locale, wchar_t has to be large enough to handle those characters. But
the language definition doesn't require Unicode locales.

Ioannis Vranos

Oct 2, 2008, 1:11:09 PM
Pete Becker wrote:
> On 2008-10-02 06:34:25 -0400, Ioannis Vranos
> <ivr...@no.spam.nospamfreemail.gr> said:
>
>> Pete Becker wrote:
>>> On 2008-10-01 12:57:27 -0400, Ioannis Vranos
>>> <ivr...@no.spam.nospamfreemail.gr> said:
>>>
>>>>
>>>> If that system supports Unicode as a system-specific type, why can't
>>>> wchar_t be made wide enough as that system-specific Unicode type, in
>>>> that system?
>>>
>>> It can be. But the language definition doesn't require it to be, and
>>> with many implementations it's not
>>
>>
>> C++03 mentions:
>>
>>
>> "Type wchar_t is a distinct type whose values can represent distinct
>> codes for all members of the *largest* extended character set
>> specified among the supported *locales* (22.1.1). Type wchar_t shall
>> have the same
>> size, signedness, and alignment requirements (3.9) as one of the other
>> integral types, called its underlying type".
>
> There's nothing there that requires wchar_t to be large enough to hold
> Unicode code points. Certainly if an implementation supports a Unicode
> local, wchar_t has to be large enough to handle those characters. But
> the language definition doesn't require Unicode locales.


Yes, I am talking about the upcoming Unicode character types in C++0x,
in comparison with the Unicode locales alternative.

Erik Wikström

Oct 2, 2008, 1:27:25 PM
On 2008-10-02 12:26, Ioannis Vranos wrote:

> Erik Wikström wrote:
>>
>> Because it has been to narrow for 5 to 10 years and the compiler vendor
>> does not want to take any chances with backward compatibility,
>
>
> How will it break backward compatibility, if the size of whcar_t changes?

Because the user expects to be able to pack 5 wchar_t into a network
message of a fixed size, or read a few characters from a specific
position in a binary file. Or any number of reasons where someone has
made assumptions about the size of wchar_t.

--
Erik Wikström

James Kanze

Oct 2, 2008, 2:57:27 PM
On Oct 2, 6:11 pm, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:
> Yannick Tremblay wrote:

[...]


> > A "Unicode" locale makes no sense because the
> > locale represent much more than simply the character encoding that is
> > being used.
> >http://www.unicode.org/reports/tr35/#Locale

> True, but I think Unicode locales could be implemented for characters
> only, leaving the rest unchanged (as they are).

> For example:

> locale::global(locale("english"));

> wcin.imbue(locale("UTF16"));
> wcout.imbue(locale("UTF16"));

> would change only the character set, keeping the rest of the
> locale settings as they are either they were previously
> defined or they are the default ones.

That's not quite how locales work. What I think you're talking
about is a UTF-16 codecvt facet. And there are ways of
constructing a locale by copying another locale, just replacing a
single facet. Of course, the ctype facet is also affected; part
of the problem in doing this cleanly is that abstractions that
we'd like to keep separate get mixed up. (Note that this can be
a problem even within a pure Unicode environment. Something
like toupper('i') is locale dependent, and will return a
different character in a Turkish locale.)

James Kanze

Oct 2, 2008, 2:58:55 PM
On Oct 2, 4:27 pm, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:
> James Kanze wrote:

Adding locales is no problem. Changing the size of wchar_t, or
anything involving its behavior, breaks real code. Some of the
code is probably poorly written, but convincing your customers
that they are idiots doesn't sell many compilers.

Ioannis Vranos

Oct 2, 2008, 3:06:26 PM
James Kanze wrote:
>
> Adding locales is no problem. Changing the size, or anything
> involving the behavior of wchar_t breaks real code. Some of the
> code is probably poorly written, but convincing your customers
> that they are idiots doesn't sell many compilers.


OK, but if their badly-written code is broken, they will fix it.

Hendrik Schober

Oct 2, 2008, 3:43:14 PM

For most of the past ten years I have written code that
had to be compiled using half a dozen compiler/std lib
combinations on as many platforms. We had the very same
code carry UTF-8 strings on some Linux versions, UTF-16
on Windows, and UTF-32 on OSX and some other Unices. We
have learned to deal with all data types being platform-
dependent and our code needing to adapt.
Still, if your vendor does something stupid (like when VC
suddenly started to throw several 10k of useless warnings
for a 2 MLoC code base that used to compile cleanly), you're
doomed.
And it isn't any different when you got yourself into
the trouble. Even if you know that, 15 years ago, some
people (who had long left the company when you came; the
company was a very different one back then, and the code's
been bought several times over) did something stupid, it
doesn't mean that, now that you have several MLoC relying
on a specific size of some built-in type, you can spend
several man-years fixing this and take another two
releases until the dust has settled and all the bugs you
introduced doing so are fixed. While that would be nice
to do, the customers won't pay for it.

C++ has always respected the gazillions of lines of legacy
code real-world projects have. That's probably a reason
for its success.

Schobi

Hendrik Schober

Oct 2, 2008, 3:46:14 PM
James Kanze wrote:
> On Oct 2, 12:21 pm, Hendrik Schober <spamt...@gmx.de> wrote:
>> James Kanze wrote:
> [...]

>>> In what encoding format? And what if the "usual" encoding for
>>> wstring isn't Unicode (the case on many Unix platforms).
>
>> <curious>
>> What are those implementations using for 'wchar_t'?
>> </curious>
>
> EUC. EUC (= Extended Unix Codes) is originally a multi-byte
> code, but exists as a 32 bit code as well, see
> http://docs.sun.com/app/docs/doc/802-1950/6i5us7asn?l=en&a=view.
> It's apparently the standard encoding for wchar_t under Solaris
> and HP/UX, and perhaps elsewhere as well. Thus, LATIN SMALL
> LETTER E WITH ACUTE has the code 0x00E9 in Unicode, but
> 0x30000069 under Solaris. (``printf( "%04x\n", (unsigned
> int)L'é' )'' -- the compiler apparently recognizes my

> LC_CTYPE=iso_8859_1 locale for the file input.)

Thanks!

Schobi

James Kanze

Oct 3, 2008, 3:47:23 AM
On Oct 2, 9:06 pm, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:
> James Kanze wrote:

I guess you've never worked in industry. The authors of
the code will claim that it's the compiler that is broken, and
find one which accepts it.

And of course, some of the code that would break probably isn't
broken. If you have no real portability requirements, and you
have a guarantee that wchar_t contains EUC, what's wrong with
programming against that? And you have that guarantee.

Practically speaking, it's easy to add new features---about the
only thing adding char32_t et al. can break is code which used
those symbols as identifiers. Whereas the standard and vendor
specifications are a contract, which you really can't change
without wreaking havoc. And, if you're a vendor, losing sales.

James Kanze

Oct 3, 2008, 3:50:44 AM
On Oct 2, 9:43 pm, Hendrik Schober <spamt...@gmx.de> wrote:

[...]


> C++ has always respected the gazillions of lines of legacy
> code real-world projects have. That's probably a reason
> for its success.

Were it only so. One of the reasons there was so much
interest in Java was that it was so difficult to write
portable C++, and because the language was felt to be changing
under you. We've had to rework quite a bit of code, including
reorganizing some, because of two-phase look-up, and the
differences between the classical iostream and the standard one
have caused more than a few problems as well.
