Do not use that; it is old.
Facebook is now supported inside the web2py distribution with oauth20_account.py.
You can find an example app here:
http://code.google.com/r/michelecomitini-facebookaccess/source/browse/#hg/applications/helloFacebook
For a simple example of Graph API usage, look here:
http://code.google.com/r/michelecomitini-facebookaccess/source/browse/applications/helloFacebook/models/grafb.py
http://code.google.com/r/michelecomitini-facebookaccess/source/browse/applications/helloFacebook/controllers/graph.py
http://code.google.com/r/michelecomitini-facebookaccess/source/browse/#hg/applications/helloFacebook/views/graph
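Very roughly, the wiring in the example app amounts to subclassing OAuthAccount in a model file and plugging it in as the login form. This is only a sketch based on the linked sources: the constructor arguments, URLs, and the `get_user` mapping below are assumptions, so check oauth20_account.py in your own copy of web2py before relying on them.

```python
# models/0_auth.py -- hypothetical sketch, not the exact example app
from gluon.contrib.login_methods.oauth20_account import OAuthAccount

# Placeholder credentials; use your own Facebook app's values.
FB_CLIENT_ID = 'your_app_id'
FB_CLIENT_SECRET = 'your_app_secret'
FB_AUTH_URL = 'https://graph.facebook.com/oauth/authorize'
FB_TOKEN_URL = 'https://graph.facebook.com/oauth/access_token'

class FaceBookAccount(OAuthAccount):
    """Map the logged-in Facebook user onto web2py's auth_user."""
    def get_user(self):
        token = self.accessToken()
        if not token:
            return None
        # Fetch https://graph.facebook.com/me?access_token=<token> and map
        # the returned JSON onto auth_user fields (the example app's
        # grafb.py does this with the facebook python-sdk).
        return dict(username='facebook_user')  # placeholder mapping

auth.settings.login_form = FaceBookAccount(
    globals(), FB_CLIENT_ID, FB_CLIENT_SECRET, FB_AUTH_URL, FB_TOKEN_URL)
```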
As for the redirection after login, I am still investigating...
mic
2010/8/20 Narendran <gunan...@gmail.com>:
diff -r 9261ce4eda7f gluon/tools.py
--- a/gluon/tools.py Thu Aug 19 04:13:54 2010 +0200
+++ b/gluon/tools.py Sun Aug 22 00:08:25 2010 +0200
@@ -982,7 +983,7 @@
request = self.environment.request
args = request.args
if not args:
- redirect(self.url(args='login'))
+ redirect(self.url(args='login', vars=request.vars))
elif args[0] in self.settings.actions_disabled:
raise HTTP(404)
if args[0] == 'login':
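The one-line change above forwards the incoming query string into the login redirect, so parameters the provider appended to the callback URL (such as Facebook's `code`) are not dropped when auth bounces to the login action. A plain-Python sketch of the difference (the `url` helper and paths here are stand-ins, not web2py's actual implementation):

```python
from urllib.parse import urlencode

def url(path, vars=None):
    # Stand-in for self.url(): optionally append vars as a query string
    query = '?' + urlencode(sorted(vars.items())) if vars else ''
    return path + query

oauth_vars = {'code': 'abc123'}  # e.g. the code Facebook appends on return
# Without the patch the redirect loses the query string:
lost = url('/app/default/user/login')
# With the patch request.vars ride along:
kept = url('/app/default/user/login', vars=oauth_vars)
```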
2010/8/20 Michele Comitini <michele....@gmail.com>:
2010/8/22 Michele Comitini <michele....@gmail.com>:
tnx!
2010/8/22 Michele Comitini <michele....@gmail.com>:
They should work. Here is an updated version of the example app running on GAE:
http://grafbook.appspot.com/helloFacebook/graph
If it works, it should maintain the URL above even after authentication with Facebook.
mic
2010/8/27 mdipierro <mdip...@cs.depaul.edu>:
This is my first foray into web2py. I am pretty comfortable with
Python, but it's never been my primary language.
In any case, I am building a recipe database and a UI for it, which will
feed a number of different systems and allow non-technical users to add
and manipulate the recipes.
Since this is a publishing house, the 'recipes' really have a lot more
associated with them than you would generally assume when thinking about
recipes.
This is replacing a shaky, older FileMaker solution. A lot of the data
is thrown all over the place, multiple tables are involved (nutrition
information, for example, lives in a separate table), and the import
process is going to involve parsing/cleaning almost-CSV files and
extracting information from non-normalized data.
I guess this process should happen just at the Python shell? Or maybe
map a URL to a function? This will likely be an iterative process, with
some amount of wiping the db and going again.
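For what it's worth, web2py can run a one-off script against your models with `python web2py.py -S yourapp -M -R yourscript.py`, which suits exactly this wipe-and-reload loop. A minimal, hypothetical sketch of the cleaning step (the table and field names are made up; only the stdlib `csv` module is assumed):

```python
import csv
import io

def clean_rows(raw_text):
    """Parse 'almost-CSV' text: drop blank lines, strip stray whitespace."""
    rows = []
    for row in csv.reader(io.StringIO(raw_text)):
        if not any(cell.strip() for cell in row):
            continue  # skip blank or whitespace-only lines
        rows.append([cell.strip() for cell in row])
    return rows

# Inside a script run with -M the DAL is available, so the load step
# might look like this (hypothetical table/field names):
#
#     db.recipe.truncate()  # wipe and go again
#     for name, calories in clean_rows(open('recipes.csv').read()):
#         db.recipe.insert(name=name, calories=calories)
#     db.commit()
```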
Looking for any tips here. Also, is it appropriate to send code to this
list for review from time to time?
Thanks a lot
Geoff