special characters and encoding problems

230 views
Skip to first unread message

ALI BEN ABDALLAH

unread,
Nov 19, 2014, 5:43:14 PM11/19/14
to scrapy...@googlegroups.com
Hello,
Please i need help.


i'm just beguining with scrapy crawling. i developed a simple spider that get article from a journal web site but i have a big problem with the encoding of the special characters and in every line of text i have the  [u" "].
The original encoding of the thext is in UTF-8 and when i crawl text i mentioned in python script also  this:

#!/usr/bin/python
# -*- coding: utf-8 -*-

But i still have the same problem.

This's an example of the output of my spider:
--------------------------------------------------------------------------
{'article_body': u'[u"Le feuilleton du scandale alimentaire au Liban se poursuit \\xe0 un rythme soutenu. Il ne se passe en effet pas un seul jour sans que de nouvelles r\\xe9v\\xe9lations concernant des violations \\xe0 la s\\xe9curit\\xe9 alimentaire ne soient faites, alimentant la pol\\xe9mique sur les responsabilit\\xe9s dans un secteur livr\\xe9 \\xe0 l\'anarchie depuis des d\\xe9cennies.", u\'Le ministre de la Sant\\xe9, Wa\\xebl Bou Faour, qui a fait \\xe9clater sa premi\\xe8re bombe \', u\'le 11 novembre\', u\', provoquant une onde de choc parmi la population, est revenu mercredi \\xe0 la charge en rendant publique une nouvelle longue liste \', u\'(voir ici)\', u\' d\\u2019\\xe9tablissements vendant certains produits non conformes aux normes. \', u\'(Voir les pr\\xe9c\\xe9dentes liste \', u\'ici\', u\' et \', u\'ici\', u\').\', u\'\\xa0\', u\'(Lire aussi:\', u\' \', u\'S\\xfbret\\xe9 alimentaire au Liban : un diagnostic inqui\\xe9tant)\', u\'\\xa0\', u\'M. Bou Faour a aussi, lors de sa conf\\xe9rence de presse, demand\\xe9 au minist\\xe8re de l\\u2019Int\\xe9rieur la fermeture de plusieurs abattoirs qui ne r\\xe9pondent pas aux normes de la s\\xe9curit\\xe9 alimentaire. Le ministre a nomm\\xe9 les abattoirs de Akbiy\\xe9, Baisriy\\xe9 et Ghaziy\\xe9, au Liban-Sud.\', u\'Le mohafez de Beyrouth a annonc\\xe9 mardi soir la \', u"fermeture officielle de l\'abattoir de Beyrouth", u", une annonce faite lors d\'une \\xe9mission t\\xe9l\\xe9vis\\xe9e sur la cha\\xeene LBCI dont l\'invit\\xe9 \\xe9tait le ministre de la Sant\\xe9.", u\'"Nous ne savons pas si l\\\'usine de broiement des os (li\\xe9e \\xe0 l\\\'abattoir de Beyrouth) a \\xe9t\\xe9 ferm\\xe9e, nous n\\\'acceptons pas des mesures cosm\\xe9tiques, mais des mesures radicales. Le probl\\xe8me de l\\u2019abattoir de Beyrouth ne r\\xe9side pas uniquement dans l\\u2019odeur et la propret\\xe9 mais aussi dans l\\u2019eau tr\\xe8s chlor\\xe9e", a soulign\\xe9 M. Bou Faour lors de sa conf\\xe9rence de presse, affirmant qu\\\'il allait soumettre au gouvernement une proposition\\xa0pour la construction d\\\'un nouvel abattoir au m\\xeame emplacement que l\\\'ancien.\', u\'\\xa0\', u\'(Lire aussi : \', u\'Scandale alimentaire : le coup de gr\\xe2ce \\xe0 un secteur en souffrance ?\', u\')\', u\'\\xa0\', u\'Le ministre de la Sant\\xe9, qui a aussi annonc\\xe9 sa d\\xe9cision de fermer une usine d\\u2019abattage de poulet \\xe0 Sa\\xefda, a r\\xe9p\\xe9t\\xe9 que sa croisade contre les aliments non conformes aux normes allait se poursuivre coute que coute.\', u"Alors que les affaires d\'empoisonnement alimentaires sont fr\\xe9quentes au Liban, l\\u2019Agence nationale d\\u2019information (Ani, officielle) a rapport\\xe9 mercredi que 10 personnes avaient \\xe9t\\xe9 hospitalis\\xe9es apr\\xe8s avoir mang\\xe9 des fajita et du taouk \\xe0 Halba, dans le Akkar.", u\'\\xa0\', u\'',
 'article_title': u'Scandale alimentaire : Bou Faour continue sur sa lanc\xe9e, une nouvelle liste rendue publique',
 'auteur': u'OLJ',
 'date_publication': u'19/11/2014',
 'section': u'\xc0 La Une',
 'url': u'http://www.lorientlejour.com/article/896879/scandale-alimentaire-bou-faour-continue-sur-sa-lancee-une-nouvelle-liste-rendue-publique.html'}
-------------------------------------------------------------------------------------------------

this is my code:spider:



::::::::::::::::::::::::::::::::::::::::::::::
#!/usr/bin/python
# -*- coding: utf-8 -*-

import scrapy
import unicodedata
from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import BaseItem
from scrapy.contrib.loader import ItemLoader
from scrapy.contrib.loader.processor import Compose, Join, MapCompose, TakeFirst
from w3lib.html import replace_escape_chars, remove_tags
from scrapy.contrib.spiders import CrawlSpider, Rule
from orient.items import ArticleLoader
from scrapy.contrib.loader import XPathItemLoader
from orient.removescripts import RemoveScripts

from scrapylib.processors import default_input_processor, default_output_processor
from scrapy.utils.python import unicode_to_str



def tag(values):
    v = values.rsplit('Lire aussi', 1)[0]
    return v

class WhitehorseLoader(ArticleLoader):
    default_input_processor = default_input_processor
    default_output_processor = default_output_processor
    article_title_out=Join(' ')
    #article_body_in = Compose(default_input_processor)
  
    article_body_out= Join(' ')
    #article_body_out=MapCompose(lambda string: string.strip())
class OrientSpider(CrawlSpider):
    name = "orient"
    allowed_domains = r["lorientlejour.com"]
    login_page = u'http://www.lorientlejour.com/account/login.php'
    start_urls = ['http://www.lorientlejour.com']
    rules = (Rule(SgmlLinkExtractor(allow=('/article/')), callback='parse_url', follow=False), )
              
   
  
    def parse_url(self, response):

        xpath = HtmlXPathSelector(response)
        item_loader = WhitehorseLoader(response=response)
        item_loader.add_xpath('auteur', str(('//div[@class="attributes"]/text()')))
        item_loader.add_xpath('date_publication', str(('//div[@class="date"]/text()')))
        item_loader.add_value('url', str((response.url).decode("utf-8")))
        item_loader.add_xpath('section', str('//div[@class="mainbar articlePage"]/h2/a/text()'))
        item_loader.add_xpath('article_title', str(('//div[@class="mainbar articlePage"]/article/h1/text()')).encode("utf-8"))
        item_loader.add_xpath('article_body',tag(str(('//div[@class="text"]/p//text()'))))
        item_loader.add_value('article_body', tag(str(value)))
      
        return str((item_loader.load_item()))
       
      

:::::::::::::::item


# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy
from scrapy import Item, Field
from scrapy.contrib.loader import ItemLoader
from scrapy.contrib.loader.processor import TakeFirst


class ArticleItem(scrapy.Item):
    # define the fields for your item here like:
 
    url = Field()
    auteur = Field()
    date_publication = Field()
    section = Field()
    article_title = Field()
    article_body = Field()
   

class ArticleLoader(ItemLoader):
    default_item_class = ArticleItem
    default_output_processor = TakeFirst()

lnxpgn

unread,
Nov 20, 2014, 11:14:53 AM11/20/14
to scrapy...@googlegroups.com
This is Unicode string, this is Scrapy internal encoding, you can convert to an encoding you want.
--
You received this message because you are subscribed to the Google Groups "scrapy-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scrapy-users...@googlegroups.com.
To post to this group, send email to scrapy...@googlegroups.com.
Visit this group at http://groups.google.com/group/scrapy-users.
For more options, visit https://groups.google.com/d/optout.

ALI BEN ABDALLAH

unread,
Nov 20, 2014, 11:31:21 AM11/20/14
to scrapy...@googlegroups.com
Hello,
Thanks for response, i convert it to utf-8 but i have the same problem.

item_loader.add_xpath('article_title', str(('//div[@class="mainbar articlePage"]/article/h1/text()')).encode("utf-8"))
when i do response .encoding i have utf-8 so that the original encoding of data is also utf-8.
do have idea, haw i can convert it correctly ?
thanks

lnxpgn

unread,
Nov 21, 2014, 4:56:53 AM11/21/14
to scrapy...@googlegroups.com
def parse_url(self, response):
       # other code
       article_title = response.xpath('//div[@class="mainbar articlePage"]/article/h1/text()')
       # now article_title is an Unicode string, Scrapy has convert it from utf-8 automatically. if no problem, you don't need to pay attention to the encoding
       # if you want find a sub-string
       # no need to convert anything
       if u'universités' in article_title:
             # do something
       # other code

if you want to save crawled content, could convert these Unicode strings to utf-8 strings before writing to disk.
--
Reply all
Reply to author
Forward
0 new messages