In SQL Server, you can run queries across database tables using the syntax you mention, as long as you have permissions on both databases and they live on the same server. I presume the same is true of some, but not all, other database engines. If you already have the SQL, use the executesql feature and you are golden.
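As a quick, self-contained illustration of the cross-database idea (using SQLite's ATTACH as a stand-in; SQL Server itself would use three-part names like database.schema.table, and this snippet is not web2py-specific):

```python
import sqlite3

# Main database with an orders table
main = sqlite3.connect(":memory:")
main.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
main.execute("INSERT INTO orders VALUES (1, 10)")

# Attach a second database under the alias 'crm'
main.execute("ATTACH DATABASE ':memory:' AS crm")
main.execute("CREATE TABLE crm.customers (id INTEGER, name TEXT)")
main.execute("INSERT INTO crm.customers VALUES (10, 'Alice')")

# One query joining tables from both databases
rows = main.execute(
    "SELECT o.id, c.name FROM orders o "
    "JOIN crm.customers c ON c.id = o.customer_id"
).fetchall()
print(rows)  # [(1, 'Alice')]
```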
In a module or controller...
    from gluon import current
    import traceback

    def run(sql, *params, **kw):
        """Execute raw SQL through the DAL; never raises,
        always returns an SQLResponse."""
        res = SQLResponse()
        if sql is None:
            res.setError('SQL was None')
            return res
        try:
            kwargs = dict(placeholders=params, as_dict=True)
            # Pass no_return=True for statements that yield no rows
            if kw.get('no_return'):
                kwargs['as_dict'] = False
            res.rows = current.db.executesql(sql, **kwargs)
        except Exception as e:
            res.setError(str(e), traceback.format_exc(), current.db._lastsql)
            return res
        res.success = True
        return res

    class SQLResponse(object):
        def __init__(self, rows=None, success=False, error=None):
            # rows=None avoids sharing one mutable default list
            self.rows = rows if rows is not None else []
            self.success = success
            self.error = dict(msg=None, trace=None)
            self.setError(error)

        def setError(self, error, rawStack=None, sql=None):
            if error is None:
                return
            self.success = False
            self.error['msg'] = error
            if rawStack is not None:
                self.error['trace'] = rawStack.split("\n")
            if sql is not None:
                self.error['sql'] = sql
I did not write this, but it seems pretty straightforward. We use it quite often when we need to do things a little outside the DAL but still get all the honey of the DAL. I suspect that since the SQL is declared up front, you get some marginal performance bumps at run time as well.
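To show the wrapper pattern in isolation, here is a minimal standalone sketch of the same idea, with sqlite3 standing in for the web2py DAL (the table and helper names here are illustrative, not from the original code):

```python
import sqlite3
import traceback

class SQLResponse(object):
    def __init__(self):
        self.rows = []
        self.success = False
        self.error = dict(msg=None, trace=None)

    def set_error(self, msg, raw_stack=None, sql=None):
        self.success = False
        self.error['msg'] = msg
        if raw_stack is not None:
            self.error['trace'] = raw_stack.split("\n")
        if sql is not None:
            self.error['sql'] = sql

# Stand-in for current.db
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (x INTEGER)")
db.execute("INSERT INTO t VALUES (42)")

def run(sql, *params):
    # Same shape as the helper above: never raise,
    # always hand back a response object.
    res = SQLResponse()
    if sql is None:
        res.set_error('SQL was None')
        return res
    try:
        res.rows = db.execute(sql, params).fetchall()
    except Exception as e:
        res.set_error(str(e), traceback.format_exc(), sql)
        return res
    res.success = True
    return res

ok = run("SELECT x FROM t WHERE x = ?", 42)
print(ok.success, ok.rows)        # True [(42,)]

bad = run("SELECT nope FROM t")   # bad column name is caught
print(bad.success, bad.error['msg'])
```

The point is that callers branch on `res.success` instead of wrapping every query site in try/except.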