unable to open database file


Karl Norrena

Dec 19, 2016, 11:34:18 PM
to ScraperWiki
I am a new user of ScraperWiki. I lifted some ScraperWiki code and found and fixed a few errors, but after several days of trying I still have one: "unable to open database file". I am running Windows 10 with Anaconda2, and running my Python code under the Rodeo IDE. Any help would be appreciated at this point.

Karl

Here is the code:

import scraperwiki
import lxml.html as lh

print "First Line *************************************************"
#print html

# "html" holds the page source fetched earlier; that step is not shown in this post
root = lh.fromstring(html)

# parse the page - note that this html is broken (img tags are not terminated), so we need to use the "parse" method
plant = dict()

# find all the row elements in the page
# note: the name of the facility is a "tail" of the img tag because of the broken html
rows = root.findall("//tr")

for row_num, cells in enumerate(rows):
    
    #print len(cells), cells[0].text

    if len(cells) == 1:
        genType = cells[0].text
        plant["type"] = genType
        continue

    # only extract data from tables with four columns
    if len(cells) == 4:

        # only store Wind
        if genType != "WIND": continue

        # the header row has no data, so skip it
        name = cells[0].text.strip()
        if len(name) < 1:
            continue
        
        # has an image?
        #imgs = cells[0].findall(".//img")
        #if len(imgs) > 0:
        #    name = imgs[0].tail.strip()
        #    print "NAME *****************************************"
        #    print name
        #else:
        #    name = cells[0].text.strip()

        # the header row has no data, so skip it
        #if not name: continue

        plant["name"] = name
        plant["mc"] = float(cells[1].text.strip())
        plant["tng"] = float(cells[2].text.strip())
        plant["dcr"] = float(cells[3].text.strip())

        #print plant
        #scraperwiki.sql.save(['C:\Users\karln\Documents\Python\name.db'], plant)
        #scraperwiki.sql.save(['url'], post)
        scraperwiki.sqlite.save(unique_keys = ["name"], data = plant, table_name="name")
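
For what it's worth, "unable to open database file" is SQLite's own error for a database path it cannot create or open, which usually points at the working directory rather than at the scraping code. A minimal sketch (plain sqlite3, no ScraperWiki; the /no/such/dir path is just an illustration) reproduces the error and shows one way around it:

```python
import os
import sqlite3
import tempfile

# SQLite raises this error when it cannot create or open the database file,
# typically because the directory does not exist or is not writable.
try:
    sqlite3.connect("/no/such/dir/scraperwiki.sqlite")
except sqlite3.OperationalError as e:
    print(e)  # -> unable to open database file

# Moving to a writable directory before the first save avoids it. The
# scraperwiki library writes to a file named scraperwiki.sqlite in the
# current working directory by default (assumption based on common usage).
os.chdir(tempfile.gettempdir())
conn = sqlite3.connect("scraperwiki.sqlite")
conn.execute("CREATE TABLE IF NOT EXISTS name (name TEXT)")
conn.close()
```

On Windows the same failure occurs when the process's current directory (here, wherever Rodeo launched Python) is not writable, so an os.chdir to a writable folder before the first scraperwiki save is a common workaround.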

Aine McGuire

Dec 20, 2016, 3:49:54 AM
to scrap...@googlegroups.com
Hello Karl,

Thank you for your message and for using our service :-)

I'll pass this to our engineers and see if we can shed some light on the problem.

As we are close to Christmas, many people are away on holiday, so I'd like to ask for your patience with us.

Kindest regards

Aine McGuire





--
Thank you 
@ainemcguire
+44 (0)7710 377929

SensibleCode.io makes products that turn messy information into valuable data
We make PDFTables.com and QuickCode.io
