What's done, and what's TODO


Jonathan LaCour

Jun 23, 2007, 11:19:11 PM
to pyg...@googlegroups.com
Greetings, all interested parties! Day one of the Pylons/TurboGears
sprint is over, and things went swimmingly. In attendance were: Jonathan
LaCour, Mark Ramm, Rick Copeland, Noah Gift, and Mike Schinkel.

Here is what we have managed thus far:

* A new controller called TurboGearsController for Pylons that
implements object-dispatch below any mount point configured in your
regular Pylons Routes configuration.

- Handles "default" methods, although not fully.
- Handles automatic lookup of "index".
- Added a cool "lookup" hook, which should allow overriding of your
dispatch at a much nicer level than the "default" methods.

* A new expose decorator that does nothing but register behavior that
is implemented by the controller itself.

- Supports the entire Buffet templating API, including TurboJSON
support.
- Supports stacking expose decorators to implement content
negotiation using Accept headers and tg_format (for now).
Spirited discussion ensued, and we will be adding a neat
alternative to tg_format shortly.
- The JSON support is implemented on top of TurboJSON, including all
the wonderful benefits of a generic jsonify function.

* A new validate decorator that implements the Pylons
htmlfill/formencode validation pattern, rather than the old
TurboGears validation pattern. It works, but needs some love,
documentation, and ToscaWidgets support (we're looking at you,
Alberto!!!).
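
To make the expose idea above concrete, here is a minimal, hypothetical sketch (not the sprint's actual code) of a decorator that only registers behavior as attributes, leaving the function unwrapped for the controller machinery to interpret at dispatch time:

```python
# Illustrative sketch only -- not the sprint's code. The idea: @expose merely
# records template/engine info on the method as an attribute; the controller
# machinery would read those annotations at dispatch time.

def expose(template='json'):
    """Register an exposure on the decorated function without wrapping it."""
    def register(func):
        # Stack multiple @expose calls into a list of exposure records.
        exposures = getattr(func, '_exposures', [])
        exposures.append({'template': template})
        func._exposures = exposures
        return func  # the original, unwrapped function is returned
    return register

@expose('json')
@expose('project.templates.some_template')
def hello():
    return dict(greeting='hi')

# The function itself is untouched; only metadata was added.
print(hello._exposures)
```

Because nothing is wrapped, the controller remains free to decide how and when to apply the registered behavior.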


What is coming up next:

* We need to create a new Pylons paster create template:

- Routes configured with a pygears.TurboGearsController as the
default root controller for all URIs
- Transaction middleware, pre-configured for a
transaction-per-request model with auto-rollback and auto-commit,
à la TurboGears 1.0, but so that you can turn it on or off on a
URI-specific basis. We're looking at using the paste.transaction
middleware, which is already included with Pylons.
- For an example of an alternate template, see:

http://code.google.com/p/tesla-pylons-elixir/

* A clone of tg-admin on top of paster (started already by Noah)

* Comments and documentation and plenty of unit tests!


We'll be sprinting again tomorrow at Panera Bread at 1:00 PM. They have
free wireless, and are located at this address:

1625 Mt Vernon Rd
Atlanta, GA 30338

If you have any questions, don't hesitate to email me or the list.

Happy sprinting!

--
Jonathan LaCour
http://cleverdevil.org

Mark Ramm

Jun 23, 2007, 11:52:42 PM
to pyg...@googlegroups.com
> What is coming up next:
>
> * We need to create a new Pylons paster create template:
>
> - Routes configured with a pygears.TurboGearsController as the
> default root controller for all URIs
> - Transaction middleware, pre-configured for a
> transaction-per-request model with auto-rollback and auto-commit,
> a'la TurboGears 1.0, but so that you can turn it off or on on a
> URI-specfic basis. We're looking at using the paste.transaction
> middleware, which is already included with Pylons.
> - For an example of an alternate template, see:
>
> http://code.google.com/p/tesla-pylons-elixir/
>
> * A clone of tg-admin on top of paster (started already by Noah)
>
> * Comments and documentation and plenty of unit tests!

I had one more thought for our new expose implementation... sometimes
you pass widgets into your templates which you don't want to jsonify.
We could create an optional hidden_fields parameter which contains
a list of keys in the dictionary you don't want passed into the
template.

I can't see any use case for this outside of JSON, because usually it
doesn't do any harm to pass extra fields into your template -- but
hidden_fields should still always be popped out of the returned dict
before it is passed into the template.

--Mark Ramm

Mike Schinkel

Jun 24, 2007, 2:08:50 AM
to pyg...@googlegroups.com
Mark Ramm wrote:
> I had one more thought for our new expose implementation....
> sometimes you pass widgets into your templates, which you
> don't want to jsonify.
> we could create and optional hidden_fields parameter which
> contains a list of keys in the dictionary you don't want
> passed into the template.

"list of keys?"

BTW, it was GREAT working with you four today. I really learned a lot.
Thanks for letting a TG newbie join in.

Also, Mark Ramm is the next DHH! Down with the Ruby-Railians. Hooray for our
fearless leader. Speech! Speech! Speech!

--
-Mike Schinkel
orga...@atlanta-web.org
http://atlanta-web.org
404-276-1276 (cell)
P.S. Mark, j/k :-) :-) :-)

Rick Copeland

Jun 24, 2007, 8:14:48 AM
to pyg...@googlegroups.com
I think he means that:

@expose('json', hidden_fields=['myform1', 'myform2'])
@expose('project.templates.some_template')
def foo(self):
    return dict(a=1, b=2, c=3, myform1=some_toscawidgets_form,
                myform2=another_form)

would remove 'myform1' and 'myform2' from the dict automatically before
jsonifying it (but passing it to the genshi controller if you accept
text/html).

Is this correct, Mark?
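
As a rough illustration of the semantics Rick describes here, a sketch (the `render` helper and engine names are invented for illustration, not TG2 API):

```python
# Hypothetical sketch of the hidden_fields semantics being discussed: the
# listed keys are dropped before JSON serialization, but an HTML template
# engine would still receive the full dict, widgets included.
import json

def render(result, engine, hidden_fields=()):
    if engine == 'json':
        # Drop widget-like entries that should not be serialized.
        result = {k: v for k, v in result.items() if k not in hidden_fields}
        return json.dumps(result, sort_keys=True)
    # Any other engine gets the full dict, widgets included.
    return result

data = dict(a=1, b=2, c=3, myform1='<toscawidgets form>', myform2='<form>')
print(render(data, 'json', hidden_fields=['myform1', 'myform2']))
```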

Mark Ramm

Jun 24, 2007, 8:19:48 AM
to pyg...@googlegroups.com
Yeah, that's it exactly.


--
Mark Ramm-Christensen
email: mark at compoundthinking dot com
blog: www.compoundthinking.com/blog

Alberto Valverde

Jun 24, 2007, 11:14:19 AM
to pyg...@googlegroups.com

Great stuff, guys! I've been browsing the code at Trac, and it looks
like the experiment is going very well indeed. Some comments/thoughts:

1) I think that the validate decorator should not be symmetric to
expose (in the sense that it only sets attributes) but should handle
validation itself (not inside the route method). The reason for this
is that it couples validation too tightly with the controller. The
current implementation, for example, doesn't integrate too well with
TW. Letting a decorator handle validation would allow specialized
validation decorators for TW, plain FormEncode, etc...

2) Regarding the "hidden_fields" parameter to avoid passing non-data
attributes to the template: I've been thinking about this for a while,
and I believe that passing those objects that are not really data
out-of-band would be cleaner. That is, everything dumped in the dict
should be available to all template engines (to avoid the "json"
special case). Any other object that should reach a template could be
passed in pylons.c, or inside the "tg" namespace (like in 1.0), or
imported directly at the template. One reason for this is that other
formats could then be handled in the future too with no extra
special-case handling (XML-RPC, JSON-RPC, etc...). It'll be
interesting to see how Kevin approached this in TGWebServices, for
ideas...

3) I've posted at paste.turbogears.org ([1] and [2]) a module I wrote
for my pylons apps to handle database transactions. It is heavily
based on pylons.database and has a piece of middleware to begin/
rollback-commit transactions based on paste.transaction. It's quite
crude at the moment (docstrings out of date, some duplicated code,
etc.), but I could clean it up if needed. Actually, the only
interesting part is the middleware piece, since the rest is probably
obsoleted by Mike Orr's new SAContext package. You might find it
useful for ideas.

4) The controllers._configured_engines global should move to pylons.g
(for example), since it will break when two TG2 apps cohabit the same
process.

I'll be back home tomorrow morning, so I can take care of trying out
these ideas (especially 1, 3, and 4) if there are no objections.

Great, great work! :)

Alberto

[1] http://paste.turbogears.org/paste/1436
[2] http://paste.turbogears.org/paste/1437


Mark Ramm

Jun 24, 2007, 11:23:00 AM
to pyg...@googlegroups.com
> 2) Regarding the "hidden_fields" parameter to avoid passing non-data
> attributes to the template. I've been thinking some while back about
> this and I believe that passing those objects that are not really
> data out-of-band would be cleaner. That is, everything dumped in the
> dict should be available to all template engines (to avoid the "json"
> special case). Any other object that should reach a template could be
> passed in pylons.c, or inside the "tg" namespace (like in 1.0) or
> imported directly at the template. One reason for this is that other
> formats could be handled in the future too with no extra special-case-
> handling (xml-rpc, json-rpc, etc...). It'll be interesiting to see
> how Kevin approached this in TGWebServices for ideas...

I don't want to bring in the c and g objects directly, but I'd be more
than willing to proxy them, or to import them as context and global
respectively, and to use them for passing stuff around more cleanly.

There seemed to be some resistance to using the c object to store
stuff that should be available in the template but not in the JSON.
But I think that's due more to its one-letter name than to the
general idea.

But we'll definitely talk more about this this afternoon.

Mark Ramm

Jun 24, 2007, 11:26:51 AM
to pyg...@googlegroups.com
> 3) I've posted at paste.turbogears.org ([1] and [2]) a module I wrote
> for my pylons apps to handle database transactions. It is heavily
> based on pylons.database and has a piece of middleware to begin/
> rollback-commit transactions based on paste.transaction. It's quite
> crude at the moment (docstrings out-of-date, some duplicated code,
> etc..) but I could clean it up if needed. Actually, the only
> interesting part is the middleware piece since the rest is probably
> obsoleted by Mike Orr's new SAContext package. You might find it
> useful for ideas.

Thanks, that will help a lot; it certainly gave me some new ideas
about how to handle configuring transactions on a per-method basis.

Mark Ramm

Jun 24, 2007, 11:38:17 AM
to pyg...@googlegroups.com
> Thanks that will help a lot, it certianly gave me some new ideas of
> how to handle configuring transactions on a per-method basis.

Oh, for the sake of making everything easy to find, here is Mike
Orr's new sacontext stuff:

http://groups.google.com/group/pylons-discuss/browse_thread/thread/93030e51bb10270a

and here is Alberto's DB stuff:

http://paste.turbogears.org/paste/1436

And here are the tests.

http://paste.turbogears.org/paste/1437

Alberto Valverde

Jun 24, 2007, 2:38:46 PM
to pyg...@googlegroups.com

On Jun 24, 2007, at 5:14 PM, Alberto Valverde wrote:

>
> I'll be back home tomorrow morning so I can take care of trying out
> these ideas (specially 1,3,4) if there are no objections.


Update: I finally managed to sneak a couple of hours from my gf at a
Starbucks and committed an alternative validator with tests... :)

Some notes:

1) Handles both positional and kw args (thanks to signature-mangling
magic ported from turbogears.util by Simon Belak ;)
2) The decorator returns a wrapped function with the same signature
(essential so the Pylons controller can properly dispatch to it). I
haven't tested, but it should preserve annotations set by expose too.
3) Uses the tg_errors semantics of TG 1.0, but without the complex
error handling at the decorator level.
4) Handles Schemas, validator dicts, or TW (or TG widgets, though
those could hardly work without CP) forms.

Does this look like a good approach to replace the current "validate"?
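
For illustration, a wrapper-style validator along these lines might look like the following sketch (this is an invented sketch, not the committed code; the error strings come from Python's built-in exceptions):

```python
# Illustrative wrapper-style @validate: converts incoming string arguments
# with a validator dict and, on failure, collects errors into a
# tg_errors-style mapping passed to the method, as in TG 1.0.
import functools

def validate(validators):
    def decorate(func):
        @functools.wraps(func)
        def wrapper(**kw):
            errors = {}
            for name, conv in validators.items():
                if name in kw:
                    try:
                        kw[name] = conv(kw[name])
                    except (TypeError, ValueError) as e:
                        errors[name] = str(e)
            if errors:
                kw['tg_errors'] = errors
            return func(**kw)
        wrapper.validators = validators  # leave the schema introspectable
        return wrapper
    return decorate

@validate({'age': int})
def save(age=None, tg_errors=None):
    return tg_errors or ('saved %d' % age)

print(save(age='42'))    # converted to int, no errors
print(save(age='nope'))  # returns the tg_errors dict instead
```

Note that attaching the validator dict to the wrapper keeps it reachable for introspection, which addresses part of the objection raised later in the thread.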

Alberto

Rick Copeland

Jun 24, 2007, 2:55:51 PM
to pyg...@googlegroups.com
I don't actually like the idea of using a wrapper if we can at all
avoid it, for a couple of reasons:

* Error handling doesn't work if you "raise Invalid" outside of the
@validate wrapper (it's just an unhandled exception)
** Particularly, you can't "raise Invalid" during object dispatch and
have it work right
* You can't introspect the validation requirements (easily) -- this is
important for a couple of reasons
** Automated URL documentation (useful in large teams where some
HTML/CSS/JS people don't really "get" Python yet)
** I can imagine situations where you could write a controller that uses
AJAX to "check" a request for validity in order to configure the UI
(hiding invalid options, etc.) -- this would be particularly useful
working with chained_validators

The existing validator decorator also handles schemas, validator dicts,
or widgets, btw. I'd be all for using something closer to tg_errors to
*handle* validation errors, however.

Just my $.02

-Rick

Alberto Valverde

Jun 24, 2007, 3:03:50 PM
to pyg...@googlegroups.com

On Jun 24, 2007, at 8:55 PM, Rick Copeland wrote:

>
> I don't actually like the idea of using a wrapper if we can avoid
> it at
> all for a couple of reasons:
>
> * Error handling doesn't work if you "raise Invalid" outside of the
> @validate wrapper (it's just an unhandled exception)

This is true. However....

> ** Particularly, you can't "raise Invalid" during object dispatch and
> have it work right

... I don't think object dispatch should handle validation. They are
clearly different responsibilities, IMO.

> * You can't introspect the validation requirements (easily) -- this is
> important for a couple of reasons

> ** Automated URL documentation (useful in large teams where some
> HTML/CSS/JS people don't really "get" Python yet)
> ** I can imagine situations where you could write a controller that
> uses
> AJAX to "check" a request for validity in order to configure the UI
> (hiding invalid options, etc.) -- this would be particularly useful
> working with chained_validators

This can be easily added with the wrapper. Just attach the schema to
the function, where it can easily be accessed by introspection tools.


>
> The existing validator decorator also handles schemas, validator
> dicts,
> or widgets, btw. I'd be all for using something closer to
> tg_errors to
> *handle* validation errors, however.

The problem I see is that I don't think it's the dispatching method's
responsibility to validate parameters. The main advantage I see to
using a wrapper is that other wrappers could be used to handle
validation in a different way and have the method's input validated
regardless of whether it's being called by the dispatcher or by other
parts of the code.

Alberto

Rick Copeland

Jun 24, 2007, 3:26:32 PM
to pyg...@googlegroups.com
Alberto Valverde wrote:
> ...

>
> ... I don't think object dispatch should handle validation. They are
> clearly different responsabilities IMO.
>
The problem is when you have a URL like
/client/42/project/16/task/22/update (which is now allowed by the
"lookup" hook). I'd like to have the following controllers:

ClientList, Client, ProjectList, Project, TaskList, Task

ClientList, ProjectList, and TaskList would implement a "lookup" method
which returns a Client, Project, and Task controller, respectively,
based on the ID passed to "lookup". I'd like to then have validators
that check 1) whether the ID exists in the DB, and 2) whether the
current user can view/update/etc. that ID. In this case, I think that
validation and object dispatch are pretty well intertwined.
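
A toy sketch of that dispatch shape (the controller and data names here are invented for illustration, not the sprint's code):

```python
# Hypothetical sketch of lookup-based dispatch: a *List controller's lookup()
# resolves an ID path segment to a child controller, and raising Invalid
# during dispatch signals a bad (or forbidden) ID.
class Invalid(Exception):
    pass

CLIENTS = {42: 'Acme Corp'}  # stand-in for the database

class Client:
    def __init__(self, record):
        self.record = record
    def update(self):
        return 'updating %s' % self.record

class ClientList:
    def lookup(self, client_id):
        try:
            record = CLIENTS[int(client_id)]
        except (KeyError, ValueError):
            raise Invalid('no such client: %r' % client_id)
        return Client(record)

# /client/42/update would resolve roughly like this:
print(ClientList().lookup('42').update())
```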

Also, in the current implementation, I can't do validation inside my
controller method (I know that the wrapper can be extended to handle
this; I just think it's cleaner to do it outside of a wrapper).


>> * You can't introspect the validation requirements (easily) -- this is
>> important for a couple of reasons
>>
>> ** Automated URL documentation (useful in large teams where some
>> HTML/CSS/JS people don't really "get" Python yet)
>> ** I can imagine situations where you could write a controller that
>> uses
>> AJAX to "check" a request for validity in order to configure the UI
>> (hiding invalid options, etc.) -- this would be particularly useful
>> working with chained_validators
>>
>
> This can be easily added with the wrapper. Just attach the schema to
> func. where it can easily be accessed by introspection tools.
>

True, but in general, wrapping functions makes introspection difficult.
I also think wrapping functions makes things more inflexible (it
pretty much makes it impossible to change where validation happens in
the future, for instance).


>> The existing validator decorator also handles schemas, validator
>> dicts,
>> or widgets, btw. I'd be all for using something closer to
>> tg_errors to
>> *handle* validation errors, however.
>>
>
> The problem i see is that I don't think it's the dipatching method's
> responsability to validate parameters. The main advantage I see to
> using a wrapper is that other wrappers could be used to handle
> validation in a different way and have the method's input validated
> regardless if it's being called by the dispatcher or by other parts
> of the code
>

If you want to write a new wrapper to handle validation in a different
way, then that's still possible. Just raise an Invalid exception on
invalid input.

Now, as to validating calls from other parts of the code, I *don't*
think that validation should apply. Validation is for validating user
input. Anything coming from another controller should already be Python
objects, not strings.

-Rick

Kevin Dangoor

Jun 24, 2007, 3:38:24 PM
to pyg...@googlegroups.com
On Jun 24, 2007, at 2:38 PM, Alberto Valverde wrote:

> Update: I finally managed to sneak a couple of hours from my gf at a
> starbucks and comitted an alternative validator with tests... :)
>
> Some notes:
>
> 1) Handles both positional and kw args (thanks to ported signature-
> mangling-magic from turbogears.util by Simon Belak ;)
> 2) The decorator returns a wrapped function with the same signature
> (essential so pylons controller can properly dispatch to it). I
> haven't tested, but it should preserve annortations set by expose too
> 3) uses the tg_errors semantics of TG 1.0 but without the complex
> error handling at the decorator level
> 4) Handles Schemas, validator dicts or TW (or TG widgets, though
> those could hardly work without CP) forms

I didn't get a chance to respond to this earlier, but I figured I
should do so now since there's hacking going on :)

I think it would be better to not use true decorators. I just
generally don't like the machinations that are required to try to
preserve what the original function looked like (even if those
machinations are hidden away in a library). I'm also not a fan of how
decorators are handled when subclassing. I like the style of adding
behavior via attributes.

A better solution than true decorators, IMHO, is a pluggable
mechanism for adding behavior when the method is called. Nothing too
fancy... It could be a list of callbacks that is set on the function.
(So, certain of the new-style decorators could add a callback to the
callback list and then have that callback called with the function
and other info at dispatch time.)

That way, the function retains its entire original appearance *and*
can be easily introspected for these other behaviors.
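
A rough sketch of that callback-list mechanism (all names here are invented for illustration, not a proposed API):

```python
# Decorators append callbacks to a list stored on the function; the
# dispatcher runs them before calling the untouched original. No wrapping,
# so the function's identity, signature, and attributes are preserved.
def add_hook(callback):
    def register(func):
        hooks = getattr(func, '_hooks', [])
        hooks.append(callback)
        func._hooks = hooks
        return func  # original function returned unwrapped
    return register

def dispatch(func, **kw):
    for hook in getattr(func, '_hooks', []):
        kw = hook(func, kw)  # each hook may adjust the incoming args
    return func(**kw)

@add_hook(lambda func, kw: {k: v.strip() for k, v in kw.items()})
def greet(name):
    return 'hello ' + name

print(dispatch(greet, name='  world  '))
```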

Kevin

Rick Copeland

Jun 24, 2007, 3:38:42 PM
to pyg...@googlegroups.com
Sorry I missed this -- one other reason I don't like wrappers:

When using wrappers, stacking order matters. This is very fertile
ground for subtle errors that I'd like to stay away from.

Alberto Valverde

Jun 24, 2007, 3:47:05 PM
to pyg...@googlegroups.com

Gotta run now, so I'll be quick or get yelled at :)

I'm beginning to see the points against wrapping (both from you and
Rick) and mostly agree...

The pluggable mechanism looks like a nice way to avoid having a
wrapper. What concerns me most is having this pre/post processing
hardcoded in the "routes" method. Maybe a generic function that could
inspect the func and the request (just a quick thought...).

Be back on the thread tomorrow...

Alberto

Kevin Dangoor

Jun 24, 2007, 4:12:22 PM
to pyg...@googlegroups.com
On Jun 24, 2007, at 3:47 PM, Alberto Valverde wrote:

> The pluggable mechanism looks as a nice way to avoid having a
> wrapper. What concenrs me most is to have this pre/post processing
> hardcoded in the "routes" method. Maybe a generic function that could
> inspect the func and request (just a quick thought....)

On IRC yesterday, I mentioned that this behavior really needs to be
exposed in a function that the user can call... you need this to be
able to do this kind of behavior:

@expose(...)
def foo(self):
    return tg2.call(self.some_other_thing)

@expose(...)
def some_other_thing(self):
    ...do something...
Kevin

Jonathan LaCour

Jun 24, 2007, 4:52:29 PM
to pyg...@googlegroups.com
Alberto Valverde wrote:

> Great stuff guys! I've been browsing the code at Trac and look the
> experiment is going very well indeed. Some comments/thoughts:

Thanks!

> 3) I've posted at paste.turbogears.org ([1] and [2]) a module I
> wrote for my pylons apps to handle database transactions. It is
> heavily based on pylons.database and has a piece of middleware to
> begin/ rollback-commit transactions based on paste.transaction. It's
> quite crude at the moment (docstrings out-of-date, some duplicated
> code, etc..) but I could clean it up if needed. Actually, the only
> interesting part is the middleware piece since the rest is probably
> obsoleted by Mike Orr's new SAContext package. You might find it
> useful for ideas.

So, I have been working on what to do for database handling in TG2, and
I am at a total loss. I looked at your code, and since it is based
heavily on Pylons' current code, I am not sure I like it. I have a
common use case that causes me headaches in TurboGears where I want to
share a model between a web application and a command-line process.
Currently, this is a big pain because you have to import turbogears into
your command line application just to get access to the metadata.

The code that you have written does the same thing. I think this is a
big mistake. At some level, all that TurboGears 2.0 needs to be able to
do is:

1. Connect your application's metadata to the configured URI.
2. Start, commit, and rollback transactions based upon exceptions.

Part of me thinks that this should mostly be handled in a separate egg
from TurboGears 2.0 entirely, that happily works from command-line
applications. It might look something like this:

from sqlalchemy.ext.sessioncontext import SessionContext
from sqlalchemy import MetaData, create_engine, create_session

metadatas = dict()
context = SessionContext(create_session)

def get_metadata(application):
    global metadatas
    return metadatas.setdefault(application, MetaData())

def connect(application, dburi, stacked_object_proxies=True):
    engine = create_engine(dburi, strategy='threadlocal')
    get_metadata(application).connect(engine)

def start(application):
    get_metadata(application).engine.begin()

def commit(application):
    context.current.flush()
    get_metadata(application).engine.commit()

def rollback(application):
    get_metadata(application).engine.rollback()

This would allow for multiple applications in the same process, and we
could create some WSGI middleware that used these functions to manage
the autocommit/rollback behavior based upon your configuration.
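
Such middleware could be as simple as this sketch (the callable hooks are generic stand-ins for start/commit/rollback helpers like the ones above; none of this is the sprint's actual code):

```python
# Sketch of per-request transaction middleware: begin before the app runs,
# roll back on any exception, commit otherwise.
class TransactionMiddleware:
    def __init__(self, app, start, commit, rollback):
        self.app = app
        self.start, self.commit, self.rollback = start, commit, rollback

    def __call__(self, environ, start_response):
        self.start()
        try:
            result = self.app(environ, start_response)
        except Exception:
            self.rollback()  # any error rolls the transaction back
            raise
        self.commit()        # otherwise commit once per request
        return result

# Tiny demonstration with logging stand-ins instead of a real database:
log = []
app = TransactionMiddleware(
    lambda environ, start_response: [b'ok'],
    start=lambda: log.append('begin'),
    commit=lambda: log.append('commit'),
    rollback=lambda: log.append('rollback'),
)
print(app({}, None), log)
```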

Another complicating factor is that I think Pylons will likely switch
to using Mike Orr's SAContext project at some point, although I dislike
its interface very much, since it uses BoundMetaData objects, forcing
you to know your dburi up front and tying your model to your
configuration system at some level.

Sorry for the rambling, but I have been scratching my head over this for
about an hour, and am not making any progress, so I am reaching out for
any sort of ideas on how to move forward!

Alberto Valverde

Jun 26, 2007, 6:08:37 AM
to pyg...@googlegroups.com

On Jun 24, 2007, at 10:52 PM, Jonathan LaCour wrote:

>
> Alberto Valverde wrote:
>
>> Great stuff guys! I've been browsing the code at Trac and look the
>> experiment is going very well indeed. Some comments/thoughts:
>
> Thanks!
>
>> 3) I've posted at paste.turbogears.org ([1] and [2]) a module I
>> wrote for my pylons apps to handle database transactions. It is
>> heavily based on pylons.database and has a piece of middleware to
>> begin/ rollback-commit transactions based on paste.transaction. It's
>> quite crude at the moment (docstrings out-of-date, some duplicated
>> code, etc..) but I could clean it up if needed. Actually, the only
>> interesting part is the middleware piece since the rest is probably
>> obsoleted by Mike Orr's new SAContext package. You might find it
>> useful for ideas.
>
> So, I have been working on what to do for database handling in TG2,
> and
> I am at a total loss. I looked at your code, and since it is based
> heavily on Pylons' current code, I am not sure I like it. I have a
> common use case that causes me headaches in TurboGears where I want to
> share a model between a web application and a command-line process.
> Currently, this is a big pain because you have to import turbogears
> into
> your command line application just to get access to the metadata.

I've been doing this for ages with that module! :) The metadata it
uses is non-bound; all you need to do to use it in a CLI script is
import that module (which is in a separate egg) and initialize a
session, passing the db's URI to the make_session constructor. This
session can be assigned to ctx.current for assign_mapper classes to
use.

Anyway, the implementation is rather clumsy (since it has _engines at
a global level, for example) and lacks key functionality (passing
options other than "echo" and "uri" to the engine's constructor), so
I'm not defending it. Just threw it in for ideas, since it's working
code that has served me well so far (I even mentioned that the only
interesting part is the middleware ;).

Another idea worth mentioning is the "transaction" StackedObjectProxy,
since it's what I use when I need control of the current transaction
inside controller methods.


>
> The code that you have written does the same thing. I think this is a
> big mistake. At some level, all that TurbOGears 2.0 needs to be
> able to
> do is:
>
> 1. Connect your application's metadata to the configured URI.
> 2. Start, commit, and rollback transactions based upon
> exceptions.
>
> Part of me thinks that this should mostly be handled in a separate egg
> from TurboGears 2.0 entirely, that happily works from command-line
> applications. It might look something like this:

+1 for a separate egg which depends only on SA

>
> (...)


>
> Another complicating factor is that I think Pylons will likely
> switch to
> using Mike Orr's SAContext project at some point. Although, I dislike
> its interface very much since it uses BoundMetaData objects,
> forcing you
> to know your dburi up front, and tying your model to your
> configuration
> system at some level.

I haven't studied SAContext very closely yet, but it looked to me like
you could change URIs on the fly... As far as I understood it, you
could do something like this:

meta = sac.get_meta(key="foo")

table = Table('atable', meta, ....)

and then later in the code:

session = sac.connect(uri, key="foo")


(syntax most probably way off!)

This doesn't force you to have the metadata connected before the
engine is configured, right?

Anyway, whatever we come up with, I'd favor a non-TG-specific
solution that can be used in a plain WSGI app or a plain script
(without the transaction middleware, I mean).

>
> Sorry for the rambling, but I have been scratching my head over
> this for
> about an hour, and am not making any progress, so I am reaching out
> for
> any sort of ideas on how to move forward!

No problem! :) Constructive rambling is appreciated

Alberto

Ben Bangert

Jun 26, 2007, 4:18:03 PM
to pylonsturbogears_sprint
On Jun 24, 1:52 pm, Jonathan LaCour <jonathan-li...@cleverdevil.org>
wrote:

> So, I have been working on what to do for database handling in TG2, and
> I am at a total loss. I looked at your code, and since it is based
> heavily on Pylons' current code, I am not sure I like it. I have a
> common use case that causes me headaches in TurboGears where I want to
> share a model between a web application and a command-line process.
> Currently, this is a big pain because you have to import turbogears into
> your command line application just to get access to the metadata.

Anytime you're dealing with loading something that needs config
options, the config needs to be loaded first. The current Pylons
scheme does not require the entire WSGI app to be loaded if you just
load the config and set up the pylons.config object.

I see no reason TG2 would differ on this, nor what the big deal is
about loading this up, especially since we could package it up more
nicely with something like:
import pylons
pylons.config.load_ini('/path/to/config')

And then you can proceed to use the models. Is that so bad?

> Another complicating factor is that I think Pylons will likely switch to
> using Mike Orr's SAContext project at some point. Although, I dislike
> its interface very much since it uses BoundMetaData objects, forcing you
> to know your dburi up front, and tying your model to your configuration
> system at some level.

If you don't use BoundMetaData, the only other way to get a model to
work with autoload is to connect the engine before the table
declarations are made. I'd consider it exceptionally bad to write off
everyone who wants to use autoload, so having BoundMetaData keeps
things sane with and without autoload and means people using it don't
have to jump through more hoops to get it working.

What's so evil about knowing the config before the models are made?
It's already a requirement for autoload, so without it you do remove
the ability to use autoload.

If SQLAlchemy in the future doesn't require the engine to be connected
before using autoload, that could change things, but until then I'd
consider SAContext to be the best way to use SQLAlchemy, since Mike
Bayer also worked on it.

Would the pylons.config.load_ini command take care of your
reservations?

Cheers,
Ben

Ben Bangert

Jun 26, 2007, 4:22:14 PM
to pylonsturbogears_sprint
On Jun 26, 3:08 am, Alberto Valverde <albe...@toscat.net> wrote:

> +1 for separate egg which only dependend on SA

If it's small, couldn't it be an SA extension? While I don't mind
packages, I do like to keep the number of them down to a minimum. If
this is a small extension, it seems ideal to have it in SA as such.

> I haven't studied SAContext very closely yet but it looked to me that
> you could change URI's on the fly... As far as I understood it you
> could do something like this:
>
> meta = sac.get_meta(key="foo")
>
> table - Table('atable', meta, ....)
>
> and then later in the code
>
> session = sac.connect(uri, key="foo")
>
> (syntax most probably way off!)
>
> This doesn't force you to have the metadata connected before the
> engine is configured, right?

I believe it does, and it can handle having multiple engines for some
objects (prolly the ones not using autoload). Again, as I mentioned in
reply to Jonathan's post, we should always assume people will want to
use autoload, so solutions that rule that out are no good.

Cheers,
Ben

Jonathan LaCour

Jun 26, 2007, 4:51:02 PM
to pyg...@googlegroups.com
Ben Bangert wrote:

> Anytime you're dealing with loading something that needs config
> options, the config needs to be loaded first.

Sure, that's not my problem at all. I just don't like the way that
triggering it on import causes me to have to tie into my configuration
system, unless I want to jump through hoops.

I have circumstances where I need to share a model between some non-WSGI
processes, mostly daemons for doing a variety of things based upon
watching directories, or command-line tools. Many times, these tools
will be distributed on a different system than my web app, using a
different config file and config system than my web framework.

Why should I have to install pylons and my pylons configuration onto a
box just to use the command line tools?

> If you don't use BoundMetaData, the only other way to get a model
> to work with autoload is to connect the engine before the table
> declarations are made. I'd consider it exceptionally bad to write-off
> everyone who wants to use autoload, so having BoundMetaData keeps
> things sane with and without autoload and means people using it don't
> have to jump through more hoops to get it working.

I understand this complaint 100%. My issue is that I don't use autoload
and this isn't really my problem :) However, it is a case that obviously
needs to be handled.

> What's so evil about knowing the config before the models are made?

As I said above, it ties your model *directly* to your configuration
system and configuration file. I should be able to plug in whatever
configuration system I want to and share my model across disparate
systems without having to install a web framework.

> If SQLAlchemy in the future doesn't require the engine to be connected
> before using autoload, that could change things, but until then I'd
> consider the SAContext to be the best way to use SQLAlchemy as Mike
> Bayer also worked on it.

I really want to like SAContext, because I share its goals. I think
I can probably get around it though, by creating a third-party module
responsible for holding and retrieving my DBURI:

from sacontext import SAContext
from dbconfig import get_dburi

sac = SAContext(uri=get_dburi())

my_table = Table("my", sac.metadata, Column(...))

Then, in my applications, I can make sure to do this in the startup
script sometime before the model is imported:

from dbconfig import set_dburi
...
set_dburi(my_configuration_system.get('sqlalchemy.dburi'))

I just hate having to jump through hoops when I can't think of a good
reason for having to. It seems like SAContext should just behave this
way from the get go, so I can do this instead in my model.py:

from sacontext import SAContext

sac = SAContext()

my_table = Table("my", sac.metadata, Column(...))

and then my web framework or command line tool could just do this step
for me before the model.py is imported:

import sacontext
sacontext.set_dburi(get_uri_from_my_config())

But, this may be too much to ask.

> Would the pylons.config.load_ini command take care of your
> reservations?

Nope. I shouldn't have to import pylons into my command-line tools, or
ship my pylons configuration files with my command line tools or desktop
applications that share my models.

Jonathan LaCour

Jun 26, 2007, 4:58:44 PM
to pyg...@googlegroups.com
Ben Bangert wrote:

>> +1 for separate egg which only dependend on SA
>
> If its small, couldn't it be a SA extension? While I don't mind
> packages, I do like to keep the amount of them down to a minimum. If
> this is a small extension, it seems ideal to have it in SA as such.

I think we should just enhance SAContext, since it has Mike Bayer's
blessing, and eventually it'll get put into SQLAlchemy.

>> I haven't studied SAContext very closely yet but it looked to me
>> that you could change URI's on the fly... As far as I understood it
>> you could do something like this:
>

> [snip, snip]


>
> I believe it does, and it can handle having multiple engines for some
> objects (prolly the ones not using autoload). Again, as I mentioned in
> reply to Jonathon's post, we should always assume people will want to
> use autoload so having solutions that rule that out are not good.

Cool, well, it seems that SAContext already has some sort of notion for
doing what I need it to. I just want to make sure that I can store my
models in some totally independent package that can be shared amongst my
applications that are in a "suite" and just do:

from sharedmodel import *

in the models/__init__.py of my pylons app.
