I've known C/C++ for years, but have only ever used ASCII strings. I have a
client who wants to know how gcc handles Unicode. I've found the functions
utf8_mbtowc, utf8_mbstowcs, utf8_wctomb and utf8_wcstombs, but I'm
wondering if there are any other libraries or functions which can handle
different kinds of encodings?
Thanks
Michael Davis
There is iconv.
Thanks!
md
E.g. to find the name of an available locale:

#include <iostream>
#include <locale>
#include <stdexcept>
using namespace std;
...
try
{
    locale AvailLocale("german");
    cout << AvailLocale.name() << endl;
}
catch (runtime_error& e)
{
    cout << e.what() << endl;
}
You should get something like this:
German_Germany.1252 (on Windows)
de_DE.iso8859-1 (on Unix/Linux)
See
http://cvs.sourceforge.net/viewcvs.py/dmx/dmx/xc/nls/locale.alias?rev=1.1.1.3
for a more detailed list.
To save a pure Unicode string to a file you need to upgrade the STL
http://www.codeproject.com/vcpp/stl/upgradingstlappstounicode.asp?print=true
or use the C-style way (fwrite), but that is not the common way of doing
it; it is platform dependent.
Use the available locales, e.g.:

locale Ger("German_Germany.1252");
wcout.imbue(Ger); // attach the locale to the stream
wstring ws(L"A German text...");
wcout << ws << endl;

// to get the current locale of a stream use:
locale CurrentLocale = wcout.getloc();
It is good to use a text editor that can display/manage these encodings.
Also visit
http://www.langer.camelot.de/Articles/Cuj/Internationalization/I18N.html
By 'Unicode' you mean UTF-16, right?
Unfortunately, while the various W-versions of the functions can
support wide-char (presumably some Unicode encoding) strings, most
of the major C++ interfaces don't support them. The assumption of
the standardizers is that there is some multibyte char type that you
can use for the system interfaces. It's really stupid and causes a
pain in the butt on systems that don't really have that mapping
(like Windows).
By 'Unicode' he should mean wide characters of an unspecified encoding. On
my compiler, it's definitely not UTF-16, because wchar_t is 32 bits.
Non-wide characters, represented with CHAR:
Many charsets (ISO 8859-1, ISO 8859-2, ...) contain only 256 characters,
which means it is not possible to cover every language in such a small
number of characters. But many applications are not able to manage
Unicode at this time, so use one of the encodings/character
representations available in your OS:
standardized charsets ISO 8859...
or windows-125X ...
or Mac x-mac-ce ... etc.
or UTF-8.
UTF? Yes, but in memory it is usually represented with a WIDE CHAR.
UTF-8 is a way of writing characters to a file: ASCII characters are
represented with one byte and other characters are represented with
more than one byte.
example: 11000011-10101101 (two bytes, encoding the character U+00ED)
UTF-16: characters are represented with two-byte units; some of those
units have a special meaning (surrogate pairs, for characters outside
the first 64K).
example: 11101101-00000000 (the same U+00ED, in little-endian byte order)
To represent as many languages as possible, use wchar_t (one
character) and wstring (a string). These types are __usually__ able to
cover all characters in the Unicode standard with 4 bytes, but it can
also be 2 bytes; the w means wide characters. To use them you have to
use the streams for wide characters. Please see std::locale and
std::locale::facet. When using the w-objects you have to be sure about
your current encoding/charset.
Usually we express text in programs with CHARs (we can be happy enough
with chars), but sometimes we want to use a different language, a very
different language that is not covered by the available encodings (with
256 characters: windows-125X, ISO 8859-...). We can handle text inside
the program as the Unicode set (and we can be happy as well), but we
(in C++) usually write to file using an available non-Unicode encoding
of our OS, because writing Unicode is not possible using std:: alone.
One way is
http://www.codeproject.com/vcpp/stl/upgradingstlappstounicode.asp?print=true
another way is using the C function fwrite:

wchar_t myWString[] = L"Some strange characters.";
fwrite(myWString, sizeof(wchar_t), sizeof(myWString)/sizeof(wchar_t),
       myFile);

but it is not portable.