
Dealing with "funny" characters


sophie_newbie

Oct 20, 2007, 9:13:44 AM
Hi, I want to store Python text strings that contain characters like
"é" and "Č" in a MySQL varchar text field. My problem is that MySQL
does not seem to accept these characters. I'm wondering if there is any
way I can somehow "encode" these characters so they are stored as
normal characters, and then "decode" them when I want to get them out
of the database again?

-Thanks.

Paul Hankin

Oct 20, 2007, 10:04:10 AM

Use unicode strings, and use the 'encode' and 'decode' methods. MySQL
has utf8 support for varchar columns, so that seems a likely encoding
to use. If you're stuck with ASCII, you can use 'unicode_escape' if
you're only accessing the data from Python.
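
A round trip of that suggestion, sketched in current Python syntax (in
the Python 2 of this thread you would start from a u'...' literal
instead):

```python
# 'unicode_escape' turns non-ASCII characters into \xNN / \uNNNN escape
# sequences, giving pure-ASCII bytes that an ASCII-only column can hold.
text = "é Č"
escaped = text.encode("unicode_escape")

# Every byte of the escaped form is plain ASCII.
assert all(b < 128 for b in escaped)

# Decoding with the same codec restores the original string exactly.
assert escaped.decode("unicode_escape") == text
```

As noted, this only helps if Python is the only thing reading the
column back; any other client would see the literal escape sequences.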

But first read this excellent article on 'funny' characters:
http://www.joelonsoftware.com/articles/Unicode.html

--
Paul Hankin

Gert-Jan

Oct 20, 2007, 10:04:22 AM
to pytho...@python.org
sophie_newbie wrote:


It seems you'll have to use Unicode in your program rather than 'plain'
strings.

Before storing a unicode text string in a database or a file, you must
encode it using an appropriate encoding/codepage, for example:

    outputstring = unicodeobject.encode('utf-8')
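
The encode/decode pair in action, sketched in current Python syntax
(the variable names are only illustrative):

```python
# Encode a unicode string to UTF-8 bytes before storing it...
text = "é Č"
outputstring = text.encode("utf-8")

# ...and decode the same bytes when reading them back out.
assert outputstring.decode("utf-8") == text

# Each non-ASCII character costs more than one byte in UTF-8.
assert len(outputstring) > len(text)
```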


This is not an easy subject. But it isn't very hard either. Please
search the Python documentation using these keywords: Unicode, decoding
and encoding.


But you should read this first:
http://www.joelonsoftware.com/articles/Unicode.html
It's an entertaining introduction to the world of codepages and unicode :-)


Regards,
Gert-Jan

John Nagle

Oct 20, 2007, 11:42:15 AM
Gert-Jan wrote:
> sophie_newbie wrote:
>> Hi, I want to store Python text strings that contain characters like "é" "Č"
>> in a mysql varchar text field. Now my problem is that mysql does not
>> seem to accept these characters. I'm wondering if there is any way I
>> can somehow "encode" these characters to appear as normal characters
>> and then "decode" them when I want to get them out of the database
>> again?
>
>
> It seems you'll have to use Unicode in your program rather than 'plain'
> strings.
>
> Before storing an unicode textstring in a database or a file, you must
> encode it using an appropriate encoding/codepage, for example:
>
> outputstring = unicodeobject.encode('utf-8')

No, no, that's wrong. MySQL and the Python interface to it understand
Unicode. You don't want to convert data to UTF-8 before putting it in a
database; the database indexing won't work.

Here's how to do it right.

First, tell MySQL, before you create your MySQL tables, that the tables are
to be stored in Unicode:

ALTER database yourdatabasename DEFAULT CHARACTER SET utf8;

You can also do this on a table by table basis, or even for single fields,
but you'll probably get confused if you do.

Then, when you connect to the database in Python, use something like this:

db = MySQLdb.connect(host="localhost",
                     use_unicode = True, charset = "utf8",
                     user=username, passwd=password, db=database)

That tells MySQLdb to talk to the database in Unicode, and it tells the database
(via "charset") that you're talking Unicode.

Within Python, you can use Unicode as well. If you have a Unicode text
editor, you can create Python source files in Unicode and have Unicode text
constants in quotes. If you do this, you should put

# -*- coding: UTF-8 -*-

as the first line of Python files. Quoted constants should be written
as

s = u'Test'

rather than

s = 'Test'

Instead of "str()", use "unicode()".

Once everything is set up like this, you can pass Unicode in and out of MySQL
databases freely, and all the SQL commands will work properly on Unicode data.

John Nagle

Diez B. Roggisch

Oct 20, 2007, 12:28:31 PM
> No, no, that's wrong. MySQL and the Python interface to it understand
> Unicode. You don't want to convert data to UTF-8 before putting it in a
> database; the database indexing won't work.

I doubt that indexing has anything to do with it whatsoever.

> Here's how to do it right.
>
> First, tell MySQL, before you create your MySQL tables, that the
> tables are
> to be stored in Unicode:
>
> ALTER database yourdatabasename DEFAULT CHARACTER SET utf8;
>
> You can also do this on a table by table basis, or even for single fields,
> but you'll probably get confused if you do.
>
> Then, when you connect to the database in Python, use something like
> this:
>
> db = MySQLdb.connect(host="localhost",
>                      use_unicode = True, charset = "utf8",
>                      user=username, passwd=password, db=database)
>
> That tells MySQLdb to talk to the database in Unicode, and it tells the
> database
> (via "charset") that you're talking Unicode.

You confuse unicode with utf-8 here. And while this appears to be
nitpicking, it is important to write this small program and meditate
for the better part of an hour in front of it running:

    while True:
        print "utf-8 is not unicode"


You continue to make that error below, so I snip that.

The important part is this: unicode is a standard that aims to provide a
codepoint for each and every character that humankind has invented. And
python unicode objects can also represent all characters one can imagine.

However, unicode as such is an abstraction. Harddisks, network
sockets, databases and the like don't deal with abstractions though -
they eat bytes. Which makes it necessary to encode unicode objects to
byte-strings when serializing them. Thus there are the thingies called
encodings: latin1 for most characters used in western Europe, for
example. But it is limited to 256 characters (actually, even less);
Chinese or Russian customers won't get too happy with it.

So some encodings are defined that are capable of encoding _ALL_
unicode codepoints - either by using more than one byte for each
character, or by providing multi-byte escape mechanisms. The former
include e.g. UCS4 (4 bytes per character); the most important member of
the latter is utf-8, which uses plain ascii for the first 128
codepoints and multi-byte sequences for everything else.
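
The size difference between these encodings is easy to observe (a
small sketch in current Python syntax):

```python
# One character, three encodings, three different byte counts.
ch = "ö"

assert len(ch.encode("latin1")) == 1     # one byte in latin1
assert len(ch.encode("utf-8")) == 2      # two bytes in utf-8
assert len(ch.encode("utf-32-be")) == 4  # four fixed bytes per codepoint
```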

Now what does that mean in python?

First of all, the coding:-declaration: it tells python which encoding to
use when dealing with unicode-literals, which are the

u"something"

thingies. If you use a coding of latin1, that means that the text

u"ö"

is expected to be one byte long, with the proper value that depicts the
german umlaut o in latin1. Which is 0xf6.

If coding: is set to utf-8, the same string has to consist not of one,
but of two bytes: 0xc3 0xb6.
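
Those byte values can be checked directly (current Python syntax; the
thread's Python 2 would write u"ö"):

```python
umlaut = "ö"

# In latin1 the character is a single byte, 0xf6.
assert umlaut.encode("latin1") == b"\xf6"

# In utf-8 the same character takes two bytes, 0xc3 0xb6.
assert umlaut.encode("utf-8") == b"\xc3\xb6"
```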

So, when editing files that are supposed to contain "funny" characters,
you have to

- set your editor to save the file in an appropriate encoding

- specify the same encoding in the coding:-declaration

Regarding databases: they store bytes. Mostly. Some allow storing
unicode by means of one of the fixed-size encodings, but you pay a
storage-size penalty for that.

So - you were right when you said that one can change the encoding a db
uses, on several levels even.

But that's not all there is to it. Another thing is the encoding in
which the CONNECTION expects byte-strings to be passed, and in which it
will render returned strings. The conversion from and to the storage
encoding is done automagically.

It is for example perfectly legal (and unfortunately happens
involuntarily) to have a database that internally uses utf-8 as storage,
potentially being able to store all possible codepoints.

But due to e.g. environmental settings, opened connections will deliver
the contents in e.g. latin1. Which of course will lead to problems if
you try to return data from a table containing, say, the most common
Chinese first names.

So you can alter the encoding the connection delivers and expects
byte-strings in. In MySQL, this can be done explicitly using

    cursor.execute("set names <encoding>")

Or - as you said - as part of a connection-string.

db = MySQLdb.connect(host="localhost",
                     use_unicode = True, charset = "utf8",
                     user=username, passwd=password, db=database)


But there is more to it. If the DB-API supports it, then the API itself
will decode the returned strings, using the specified encoding, so that
the user will only deal with "real" unicode objects, greatly reducing
the risk of mixing byte-strings with unicode objects. That's what the
use_unicode parameter is for: it makes the API accept and deliver
unicode objects. But it would do so even if the charset parameter was
"latin1". Which makes me repeat the lesson from the beginning:

    while True:
        print "utf-8 is not unicode"
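
The lesson can even be checked mechanically (a sketch in current
Python, where str plays the role of Python 2's unicode type):

```python
text = "ö"                    # a unicode string: a sequence of codepoints
data = text.encode("utf-8")   # its utf-8 serialization: a sequence of bytes

# The string and its utf-8 encoding are different types and never
# compare equal; only decoding turns the bytes back into the string.
assert type(text) is not type(data)
assert text != data
assert data.decode("utf-8") == text
```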


Diez

John Nagle

Oct 21, 2007, 12:56:50 PM
Diez B. Roggisch wrote:
>> No, no, that's wrong. MySQL and the Python interface to it understand
>> Unicode. You don't want to convert data to UTF-8 before putting it in a
>> database; the database indexing won't work.
>
> I doubt that indexing has anything to do with it whatsoever.

Of course it does. ORDER BY, LIKE, TRIM, and other SQL expressions that
do more than an equal comparison need to know the actual data representation.
If you were to convert to UTF-8 or UCS-2 in the Python program and send
the resulting byte string to MySQL, with MySQL thinking it was storing
ASCII or a BLOB, many SQL functions won't work right. A database is
not a file system; a database looks at the data.


>
>> Here's how to do it right.
>>
>> First, tell MySQL, before you create your MySQL tables, that the
>> tables are
>> to be stored in Unicode:
>>
>> ALTER database yourdatabasename DEFAULT CHARACTER SET utf8;
>>
>> You can also do this on a table by table basis, or even for single
>> fields,
>> but you'll probably get confused if you do.
>>
>> Then, when you connect to the database in Python, use something
>> like this:
>>
>> db = MySQLdb.connect(host="localhost",
>>                      use_unicode = True, charset = "utf8",
>>                      user=username, passwd=password, db=database)
>>
>> That tells MySQLdb to talk to the database in Unicode, and it tells
>> the database
>> (via "charset") that you're talking Unicode.
>
> You confuse unicode with utf-8 here.

... pontification deleted

> But due to e.g. environmental settings, opened connections will deliver
> the contents in e.g. latin1. Which of course will lead to problems if
> you try to return data from the table with the topmost chines first names.

???


>
> So you can alter the encoding the connection delivers and expects
> byte-strings in. In mysql, this can be done explcit using
>
> cursor.execute("set names <encoding>")
>
> Or - as you said - as part of a connection-string.
>
> db = MySQLdb.connect(host="localhost",
>                      use_unicode = True, charset = "utf8",
>                      user=username, passwd=password, db=database)
>
>
> But there is more to it. If the DB-API supports it, then the API itself
> will decode the returned strings, using the specified encoding, so that
> the user will only deal with "real" unicode-objects, greatly reducing
> the risk of mixing byte-strings with unicode-objects. That's what the
> use_unicod-parameter is for: it makes the API accept and deliver
> unicod-objects. But it would do so even if the charset-parameter was
> "latin1".

Yes, and that's taken care of if the connection is set up as above.
For legacy reasons, there are both "use_unicode" and "charset"
parameters to "connect", and they can be set in incompatible ways.
Also, you need MySQL 5 or later to get full Unicode support.

John Nagle

Diez B. Roggisch

Oct 22, 2007, 6:25:54 AM
>> I doubt that indexing has anything to do with it whatsoever.
>
> Of course it does. ORDER BY, LIKE, TRIM, and other SQL expressions
> that do more than an equal comparison need to know the actual data
> representation. If you were to convert to UTF-8 or UCS-2 in the Python
> program and send the resulting byte string to MySQL, with MySQL
> thinking it was storing ASCII or a BLOB, many SQL functions won't work
> right. A database is not a file system; a database looks at the data.

Garbage in, garbage out. But putting correctly encoded data into it
won't cause any trouble, so "You don't want to convert data to UTF-8
before putting it in a database; the database indexing won't work." is
utter nonsense.

>> You confuse unicode with utf-8 here.
> ... pontification deleted

Pontification in contrast to what - your highly informative posts like this?
http://mail.python.org/pipermail/python-list/2007-October/461375.html

I'm sure there are other daily routines your audience here can't wait
to be informed of at regular intervals.

Just because you write nonsense like

"""
First, tell MySQL, before you create your MySQL tables, that the
tables are to be stored in Unicode:

    ALTER database yourdatabasename DEFAULT CHARACTER SET utf8;
"""

confusing unicode with an encoding of it doesn't make me pontificate.

Diez

Chris Mellon

Oct 22, 2007, 2:44:27 PM
to pytho...@python.org
On 10/20/07, John Nagle <na...@animats.com> wrote:
> Gert-Jan wrote:
> > sophie_newbie wrote:
> >> Hi, I want to store Python text strings that contain characters like "é" "Č"
> >> in a mysql varchar text field. Now my problem is that mysql does not
> >> seem to accept these characters. I'm wondering if there is any way I
> >> can somehow "encode" these characters to appear as normal characters
> >> and then "decode" them when I want to get them out of the database
> >> again?
> >
> >
> > It seems you'll have to use Unicode in your program rather than 'plain'
> > strings.
> >
> > Before storing an unicode textstring in a database or a file, you must
> > encode it using an appropriate encoding/codepage, for example:
> >
> > outputstring = unicodeobject.encode('utf-8')
>
> No, no, that's wrong. MySQL and the Python interface to it understand
> Unicode. You don't want to convert data to UTF-8 before putting it in a
> database; the database indexing won't work.
>

Just for the record, if MySQL supports ucs-2 and/or ucs-4, using the
one that matches your Python build is likely to give you better
runtime performance.
