>>> artist = u'B\xe9la Fleck'
>>> artist.encode('ascii', 'ignore')
'Bla Fleck'
However, I'd like to see the more sensible "Bela Fleck" instead of
dropping '\xe9' entirely. I believe this sort of translation can be
done using:
>>> artist.translate(XXXX)
The trick is finding the right XXXX. Has someone attempted this
before, or am I stuck writing my own solution?
You want ASCII, Dammit: http://www.crummy.com/cgi-bin/msm/map.cgi/ASCII+Dammit
--
Brian Beck
Adventurer of the First Order
Why do you want only ASCII characters? What platform are you running
on?
If it's just a display problem, and the Unicode doesn't stray outside
the first 256 codepoints, you shouldn't have a problem e.g.
Python 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)]
on win32
[snip]
IDLE 1.1.3
>>> artist = u'B\xe9la Fleck'
>>> artist
u'B\xe9la Fleck'
>>> print artist
Béla Fleck
>>> import sys
>>> sys.stdout.encoding
'cp1252'
>>> print artist.encode('latin1')
Béla Fleck
On a *nix box, using latin1 should work.
>
> However, I'd like to see the more sensible "Bela Fleck" instead of
> dropping '\xe9' entirely. I believe this sort of translation can be
> done using:
>
> >>> artist.translate(XXXX)
>
> The trick is finding the right XXXX. Has someone attempted this
> before, or am I stuck writing my own solution?
However, if you really insist on having only ASCII characters, then
you've pretty much got to make up your own translation table. There was
a thread or two on this topic within the last few months. Merely
stripping accents, umlauts, cedillas, etc. off most European
scripts where the basic alphabet is Roman/Latin is easy enough. However,
some scripts use characters which are not Latin letters with detachable
decorations, and you will need 2 characters out for 1 in (e.g. German
eszett, Icelandic thorn (the name of the god with the hammer is shown
in ASCII as Thor, not Por!)). Scripts like Greek and Cyrillic would
need even more work.
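To make the above concrete: a hand-rolled XXXX for translate() can just be a
dict mapping code points to replacement strings, which also handles the
2-characters-out-for-1-in cases like eszett. A minimal sketch (Python 3
syntax, where all strings are Unicode; under Python 2 the same dict works
with unicode.translate) -- the particular entries here are only examples,
not a complete table:

```python
# A tiny hand-made translation table: code point -> ASCII replacement.
# Multi-character replacements are allowed, so eszett can become "ss".
table = {
    0xe9: u'e',   # e-acute
    0xe8: u'e',   # e-grave
    0xfc: u'u',   # u-umlaut
    0xdf: u'ss',  # German eszett: two characters out for one in
    0xfe: u'th',  # Icelandic thorn
}

artist = u'B\xe9la Fleck'
print(artist.translate(table))  # Bela Fleck
```

Anything not listed in the table passes through unchanged, so the table only
needs entries for the characters you actually expect to see.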
HTH,
John
In this specific example, there is a different approach, using
the Unicode character database:
def strip_combining(s):
    import unicodedata
    # Expand pre-combined characters into base+combinator
    s1 = unicodedata.normalize("NFD", s)
    r = []
    for c in s1:
        # add all non-combining characters
        if not unicodedata.combining(c):
            r.append(c)
    return u"".join(r)
py> strip_combining(u'B\xe9la Fleck')
u'Bela Fleck'
As the accented characters get decomposed into base character
plus combining accent, this strips off all accents in the
string.
Of course, it is still fairly limited. If you have non-latin
scripts (Greek, Cyrillic, Arabic, Kanji, ...), this approach
fails, and you would need a transliteration database for them.
There is none built into Python, and I couldn't find a
transliteration database that transliterates all Unicode characters
into ASCII, either.
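The decompose-then-strip idea is often written as a one-liner that combines
NFKD normalization with the 'ignore' error handler from the original
message. A sketch in Python 3 syntax (an assumption on my part; in Python 2
you would skip the final decode and get a byte string):

```python
import unicodedata

def to_ascii(s):
    # Decompose accented characters into base + combining mark,
    # then drop every non-ASCII code point (the combining marks
    # among them) via the 'ignore' error handler.
    return unicodedata.normalize('NFKD', s).encode('ascii', 'ignore').decode('ascii')

print(to_ascii(u'B\xe9la Fleck'))  # Bela Fleck
```

Note that this has exactly the limitation described above: eszett, thorn,
Greek, Cyrillic, etc. do not decompose into a Latin base letter, so they are
silently dropped rather than transliterated.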
Regards,
Martin
Assuming the data are in latin-1 or can be converted to it, try my latscii
codec:
http://orca.mojam.com/~skip/python/latscii.py
Skip