Bulk upload crashes with dev_appserver but works when uploading to domain

sjh

Jul 2, 2012, 1:54:44 AM7/2/12
to google-a...@googlegroups.com
Hi,

I wrote an application a while back that did a bulk upload. It worked with dev_appserver and on my domain. Now dev_appserver crashes when I try to upload a few entries. I remember the dev datastore was a nightmare because of authentication: you had to first open the location of the remote_api in your browser, enter a username and password, and then it would work. I managed to remember all of that again, but I still cannot upload the values into dev_appserver -- it just keeps crashing. Note that I am now using 64-bit Python 2.7 (before I was using 32-bit Python 2.5). Here is the command to upload to the local server:

appcfg.py upload_data --config_file=spectrum_loader.py --filename=spectrum_data.csv --kind=Spectrum --url=http://localhost:8080/theremote_api ./

and this is the error that comes back from the application, also running on port 8080.

Uploading data records.
[INFO    ] Logging to bulkloader-log-20120701.224621
[INFO    ] Throttling transfers:
[INFO    ] Bandwidth: 250000 bytes/second
[INFO    ] HTTP connections: 8/second
[INFO    ] Entities inserted/fetched/modified: 20/second
[INFO    ] Batch Size: 10
[INFO    ] Opening database: bulkloader-progress-20120701.224621.sql3
Please enter login credentials for localhost
Password for ju...@appengine.com:
[INFO    ] Connecting to localhost:8080/theremote_api
[INFO    ] Starting import; maximum 10 entities per post
[ERROR   ] [WorkerThread-4] WorkerThread:
Traceback (most recent call last):
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\adaptive_thread_pool.py", line 176, in WorkOnItems
    status, instruction = item.PerformWork(self.__thread_pool)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\bulkloader.py", line 764, in PerformWork
    transfer_time = self._TransferItem(thread_pool)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\bulkloader.py", line 935, in _TransferItem
    self.request_manager.PostEntities(self.content)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\bulkloader.py", line 1420, in PostEntities
    datastore.Put(entities)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\datastore.py", line 579, in Put
    return PutAsync(entities, **kwargs).get_result()
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\datastore\datastore_rpc.py", line 809, in get_result
    results = self.__rpcs[0].get_result()
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 604, in get_result
    return self.__get_result_hook(self)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\datastore\datastore_rpc.py", line 1579, in __put_hook
    self.check_rpc_success(rpc)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\datastore\datastore_rpc.py", line 1216, in check_rpc_success
    raise _ToDatastoreError(err)
BadRequestError: app "dev~myprog" cannot access app "myprog"'s data
[INFO    ] [WorkerThread-3] Backing off due to errors: 1.0 seconds
[INFO    ] An error occurred. Shutting down...
[ERROR   ] Error in WorkerThread-4: app "dev~myprog" cannot access app "speclib01"'s data

[INFO    ] 9 entities total, 0 previously transferred
[INFO    ] 0 entities (3494 bytes) transferred in 21.4 seconds
[INFO    ] Some entities not successfully transferred

Any ideas would be much appreciated. I could just debug on the domain server, but that defeats the purpose of the development server -- very frustrating!
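The only thing I can think of is that the error mentions "dev~myprog", so maybe the local datastore stores everything under a "dev~"-prefixed partition and the uploader has to be told about it. A sketch of the same command with the partitioned app ID passed explicitly (the --application value here is just a guess based on the error text; untested):

```shell
# Same upload command as above, but naming the dev partition explicitly.
# "dev~myprog" is taken from the error message -- an assumption, not a
# documented value.
appcfg.py upload_data \
  --config_file=spectrum_loader.py \
  --filename=spectrum_data.csv \
  --kind=Spectrum \
  --url=http://localhost:8080/theremote_api \
  --application=dev~myprog \
  ./
```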

thanks, sjh
