I have to export an Oracle DB (a schema, I should say) whose character set
is WE8ISO8859P1 into an existing instance encoded in UTF-8.
This generates data inconsistency and, therefore, data loss.
I have not found a way out so far, and neither have the DBAs in my company.
Have any of you come across this problem?
Is there a way to solve it? If so, please drop me a hint.
Regards.
--
Olivier
http://e-cologis.com
French free find-a-roommate service.
If the data is valid ISO-8859-1 encoded data, importing it into a UTF-8
database should be no problem: UTF-8 can encode every character that
ISO-8859-1 can, plus many more. If there are
problems, my first concern would be that the data in the ISO-8859-1
database is corrupt and is not actually encoded in the ISO-8859-1
character set. Are you sure that the data in the database is actually
ISO-8859-1 encoded data? Are you storing anything other than English
or Western European data?
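Justin's corruption concern can be illustrated outside the database with iconv (this is just a sketch, not an Oracle tool): if UTF-8 bytes were mistakenly stored in a column the database believes is ISO-8859-1, the export/import conversion will convert them a second time and produce mojibake.

```shell
# 0xC3 0xA9 is "é" already encoded as UTF-8 (2 bytes).
# If the database thinks those bytes are ISO-8859-1, conversion to
# UTF-8 treats them as two separate characters ("Ã" and "©") and
# re-encodes each, yielding 4 bytes of garbage instead of 2.
printf '\303\251' | iconv -f ISO-8859-1 -t UTF-8 | wc -c   # prints 4
```

Valid data converts once and stays correct; mis-tagged data gets doubly encoded, which looks like the "data inconsistency" described in the original post.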
If data corruption isn't the problem, how are you moving the data
between the databases?
Justin Cave
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC
Export with NLS_LANG set to we8iso8859p1 ...
and import with NLS_LANG set to we8iso8859p1.
What are you setting NLS_LANG to while running exp/imp?
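A minimal sketch of that advice (the usernames, file name, and connect string are placeholders, not from the thread): keep NLS_LANG matched to the source database's character set for both steps, so the dump file carries the data unconverted and the single conversion to UTF-8 happens on insert into the target database.

```shell
# Hypothetical example: territory/language portion may differ at your site;
# the character-set suffix is what matters here.
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

# Export from the WE8ISO8859P1 source schema (no conversion at export time).
exp userid=scott/tiger file=scott.dmp owner=scott

# Import into the UTF-8 instance; Oracle converts WE8ISO8859P1 -> UTF-8 once.
imp userid=scott/tiger@utf8db file=scott.dmp fromuser=scott touser=scott
```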
Anurag
Hi,
Normally an export from WE8ISO8859P1 to UTF-8 should not have conversion
problems, though column truncation can occur because 8-bit characters
expand to 2 bytes in UTF-8. Often, though, invalid data does get into the
database and cannot be properly converted. To understand the issues
and the possible solutions, take a look at the following white paper,
which can be found on the OTN globalization home page at:
http://otn.oracle.com/tech/globalization/pdf/mwp.pdf
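The truncation mentioned above comes from byte expansion: a character that occupies one byte in WE8ISO8859P1 takes two bytes in UTF-8, so a VARCHAR2 column sized in bytes can overflow on import even though the data converts cleanly. A quick illustration with iconv (not an Oracle tool, just a sketch of the byte math):

```shell
# 0xE9 is "é" in ISO-8859-1: one byte on the source side.
# In UTF-8 the same character is encoded as 0xC3 0xA9: two bytes,
# so a 10-byte column full of accented characters needs up to 20 bytes.
printf '\351' | iconv -f ISO-8859-1 -t UTF-8 | wc -c   # prints 2
```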