Testing your app using py.test

Vinicius Assef

unread,
Apr 3, 2013, 8:14:13 PM
to web2py
Hi guys.

I just published an example application with runnable test cases [1].

I don't like doctests, so I used py.test for a few reasons:
- I don't need to subclass anything to write my test cases.
- py.test understands unittest and nose tests.
- py.test's fixture system is (really) very flexible, allowing me to
inject external dependencies. That's the case with web2py's env().
- I can make assertions in plain Python, with the assert statement.

This approach mixes WebClient and env(), allowing test cases to
execute a controller/function() and immediately check with DAL
commands whether the action ran properly (i.e., the database was
updated).
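For illustration, here is a minimal sketch of what such a test can
look like. The fixture, app and table names are only examples, not
necessarily the exact API of the project, and it assumes py.test is
run from the web2py root:

    import pytest
    from gluon.shell import env

    @pytest.fixture
    def w2p():
        # Build a web2py environment (db, request, ...) for a
        # hypothetical app called 'myapp', executing its models.
        return env('myapp', import_models=True)

    def test_new_task_is_persisted(w2p):
        db = w2p['db']
        db.task.insert(title='write tests')  # assumes a 'task' table
        assert db(db.task.title == 'write tests').count() == 1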

More details are in the docs and in the code.

Feel free to fork, collaborate and use it. But give us feedback.

[1] https://github.com/viniciusban/web2py.test

--
Vinicius Assef

Massimo Di Pierro

unread,
Apr 3, 2013, 9:43:45 PM
to web...@googlegroups.com
+1

Arnon Marcus

unread,
May 16, 2013, 4:31:27 AM
to web...@googlegroups.com
This sounds great. I am currently researching testing options for web2py, and py.test is one of my leading candidates.
I haven't read past the readme yet, but I plan to - it's good news to see that there is something actively developed/maintained and up-to-date for doing this.

I have some questions/comments though:

I've been watching many lectures on testing in Python lately (mainly from PyCons), and there is a lot of discussion on what should be unit-tested, and what should be considered higher-level (functional/system/integration). For example, it is generally considered bad practice to think of controller-action tests as 'unit-tests', because they rely so heavily on so many external dependencies and environment state. Therefore, the suggested approach is to keep controller-actions (called 'views' in Django-land) as small as possible, and externalize all logic into separate modules that do not require the general environment as a dependency. Controller-actions are best thought of as suited for integration-tests, which should be fewer and farther between when compared to unit-tests. In addition, the suggestion is to not mock out the templates, but actually test their logic - meaning, to use the templating engine of your framework in conjunction with your actual template code, to gain optimal test coverage. The assertions should be on the output of the template renderings, with something like WebTest to mock out the request objects, and asserts over the returned response objects.

There is also a discussion about database access - generally, you shouldn't do it, for performance reasons - you should mock the models layer. But then there is also a general suggestion against mocking the persistence layer, as it is generally a very large and complex layer, with very broadly-defined boundaries. So the suggested approach is to mock over the 'model-using' code (the DAL-using code) - meaning, using something like Michael Foord's 'mock' library for monkey-patching the code that uses the db object.

Another discussion revolves around decoupling, and strives for usage of dependency injection. This is especially relevant for the above-mentioned data-layer mocking. It is suggested that the code be structured in a way that allows unit-testing the business-logic code without actual database accesses, through dependency-injection approaches. What I read this to mean is that you should have your business-logic code in a separate module that receives a db object from the outside. This way, you could provide a mocked/monkey-patched version of the db object when unit-testing. This fake db should be constructed in the setup area of your testing code.
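As a sketch of what I mean (all names here are invented for the example) - a module function that takes db as a parameter, so a test can hand it a fake:

    # modules/pricing.py - illustrative only
    def open_order_totals(db):
        # Business logic that receives the db object instead of using a
        # global one, so tests can inject a fake/mocked replacement.
        rows = db(db.orders.status == 'open').select(db.orders.total)
        return sum(r.total for r in rows)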

What are your thoughts on these suggestions?

Niphlod

unread,
May 16, 2013, 4:37:59 AM
to web...@googlegroups.com
the time taken to write something that mocks a database is orders of magnitude more than just testing on a separate database instance.
As for "decoupling", I'm a big believer that webapp tests should be executed as behavioural tests, not unit-tests.
Of course business-logic core modules (if any) can be tested with unittest, but that's actually "outside" the scope of web2py... they're just standard Python modules if you keep them to just "business logic" and they don't interfere with presentation logic.

Arnon Marcus

unread,
May 16, 2013, 7:03:33 AM
to web...@googlegroups.com

On Thursday, May 16, 2013 11:37:59 AM UTC+3, Niphlod wrote:
the time taken to write something that mocks a database is orders of magnitude more than just testing on a separate database instance.
 
And the execution time of tests that use a database is orders of magnitude longer than that of tests that don't - it's not as trivial a decision as it seems.
But as I said, the suggestion is not to mock out the entire persistence layer; that is actually considered a bad idea, because of the reasons we both acknowledge.
So the optimal solution should be neither. It should be something 'clever' in between these two extremes.
For example, the 'mock' library provides an API-free approach that enables you to monkey-patch/override a highly complex set of recursive/procedural calls.
For instance, consider a highly-complex query, such as:

db(db.Budget_BD_Resources.BudgetBD == bd).select(db.Budget_BD_Resources.Amount.sum(), groupby=db.Budget_BD_Resources.BudgetBD)

You could theoretically monkey-patch this whole statement in its entirety (or any part of it), so it returns whatever you want (say, a pre-made 'rows' instance with 'row' instances inside).
The 'mock' library should allow you to do that, while constraining the monkey-patch to a temporary execution context, and tearing it down automatically at the end of the test (either with a decorator, or a context manager that is provided in the library).
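A minimal sketch of that pattern (the table comes from the query above; the function under test is invented for the example, and here the fake is passed in directly instead of installed with patch()):

    from mock import MagicMock  # Michael Foord's library: pip install mock

    def budget_amounts(db, bd):
        # Stand-in for the db-using code under test (illustrative).
        q = db.Budget_BD_Resources.BudgetBD == bd
        return [r.Amount for r in db(q).select(db.Budget_BD_Resources.Amount)]

    def test_budget_amounts_without_a_database():
        fake_db = MagicMock()
        # db(...) returns a set; set.select(...) returns our canned rows.
        fake_db.return_value.select.return_value = [MagicMock(Amount=100),
                                                    MagicMock(Amount=250)]
        assert budget_amounts(fake_db, bd=1) == [100, 250]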
 
 
As for "decoupling", I'm a big believer that webapp tests should be executed as behavioural tests, not unit-tests.

Yup! :) 


Of course business-logic core modules (if any) can be tested with unittest, but that's actually "outside" the scope of web2py... they're just standard Python modules if you keep them to just "business logic" and they don't interfere with presentation logic.

Well, that assumes you can completely decouple the business logic from the code that uses the db object - that's not a trivial decoupling to accomplish.
Decoupling it from presentation logic is usually easier, but that's only half of the story.

Niphlod

unread,
May 16, 2013, 7:12:21 AM
to web...@googlegroups.com


On Thursday, May 16, 2013 1:03:33 PM UTC+2, Arnon Marcus wrote:

On Thursday, May 16, 2013 11:37:59 AM UTC+3, Niphlod wrote:
the time taken to write something that mocks a database is orders of magnitude more than just testing on a separate database instance.
 
And the execution time of tests that use a database is orders of magnitude longer than that of tests that don't - it's not as trivial a decision as it seems.

uhm. How many tests do you want to run in a day? Let's say a test run takes 20 minutes. You can spin 72 jobs a day. Need more? Spin another VM... the days when processing power was limited are gone for good.
 
But as I said, the suggestion is not to mock out the entire persistence layer; that is actually considered a bad idea, because of the reasons we both acknowledge.
So the optimal solution should be neither. It should be something 'clever' in between these two extremes.
For example, the 'mock' library provides an API-free approach that enables you to monkey-patch/override a highly complex set of recursive/procedural calls.
For instance, consider a highly-complex query, such as:

db(db.Budget_BD_Resources.BudgetBD == bd).select(db.Budget_BD_Resources.Amount.sum(), groupby=db.Budget_BD_Resources.BudgetBD)


uhm2. This is just asking for nightmares. If you want speed (don't want to involve a database) you test "transformations", not the fetch-transform couple.

 
You could theoretically monkey-patch this whole statement in its entirety (or any part of it), so it returns whatever you want (say, a pre-made 'rows' instance with 'row' instances inside).
The 'mock' library should allow you to do that, while constraining the monkey-patch to a temporary execution context, and tearing it down automatically at the end of the test (either with a decorator, or a context manager that is provided in the library).
 

Good luck :D
 

Arnon Marcus

unread,
May 16, 2013, 8:18:13 AM
to web...@googlegroups.com

uhm. How many tests do you want to run in a day? Let's say a test run takes 20 minutes. You can spin 72 jobs a day. Need more? Spin another VM... the days when processing power was limited are gone for good.
 

Are we talking unit-tests or integration-tests?
For unit-tests, people doing TDD want interactive performance. They configure watchers on their files, so the tests run locally each time they save them.
It means their entire suite of unit-tests should run in a second or two, preferably less.
How many times do they expect to run them in a day?
I'm not sure that's even a relevant question to ask... (at least not for unit-tests - if it is, it shouldn't be...) As many times as they save their files...
Web apps, specifically, are usually very light on the compute side - they are mainly I/O-bound, not CPU-bound.
I have seen several lectures showing live testing sessions that run hundreds of tests in less than a second.
You cannot do this if your unit-tests are using a database.
The minutes scale of tests should be integration-tests, not unit-tests. Those are usually run somewhere around 5 times a day per developer, according to what I've seen.

uhm2. This is just asking for nightmares. If you want speed (don't want to involve a database) you test "transformations", not the fetch-transform couple.

Sure, but you want to use your existing business-logic core, which usually contains db-object usage...
It's not a nightmare to monkey-patch these objects/calls like that; it's actually a pretty common pattern.
 
You could theoretically monkey-patch this whole statement in its entirety (or any part of it), so it returns whatever you want (say, a pre-made 'rows' instance with 'row' instances inside).
The 'mock' library should allow you to do that, while constraining the monkey-patch to a temporary execution context, and tearing it down automatically at the end of the test (either with a decorator, or a context manager that is provided in the library).
 

Good luck :D

10x :)

Vinicius Assef

unread,
May 16, 2013, 10:06:26 AM
to web2py
On Thu, May 16, 2013 at 9:18 AM, Arnon Marcus <a.m.m...@gmail.com> wrote:
>
>
> For unit-tests, people doing TDD want interactive performance. They
> configure watchers on their files, so the tests run locally each time they
> save them.
> It means their entire suite of unit-tests should run in a second or two,
> preferably less.

I like this subject a lot. It's not about tests, actually. It's about
architecture. ;-)

You're absolutely right, Arnon!

In a large system, though, it is difficult to run the whole test suite
in a few seconds. To address this problem, you can run just the tests
that cover what you changed. This is the unit test approach: isolation,
right? Conceptually, a unit test tests some atomic code, i.e., a
piece of software in isolation.

If your architecture is good, you'll achieve that.

To address it, try to put the code you want unit tested in modules, so
it's decoupled from your "delivery mechanism", which is what the web really is.
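A tiny sketch of what I mean (names invented):

    # controllers/default.py - thin action, just wiring
    def add_task():
        from task_logic import create_task  # app module under modules/
        return dict(id=create_task(db, request.vars.title))

    # modules/task_logic.py - unit-testable without the web layer
    def create_task(db, title):
        return db.task.insert(title=title)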


> How many times do they expect to run them in a day?

Hundreds. Thousands!
Multiply that by the number of developers your project has.


> I have seen several lectures showing live testing sessions that run hundreds
> of tests in less than a second.
> You cannot do this if your unit-tests are using a database.
> The minutes scale of tests should be integration-tests, not unit-tests.

I've been studying Uncle Bob's Clean Architecture approach [1].
Graphically, it divides your system into circles, to help control what
has access to what. It's a nice approach.

He counts on dependency injection, but that's not a simple way to
develop software.

Every performance problem when running a unit test can be considered a
"smell". Usually you have a coupled design.

But I understand we could follow the Zen of Python, claiming
practicality over purity.

I see that developing a really decoupled application doesn't help in
most cases, mainly in the small-apps world.

[1] http://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html


> Those are usually run somewhere around 5 times a day per developer,
> according to what I've seen.

It's not the TDD approach, though.
If Arnon is talking about TDD, it's not useful to limit how many times
a test suite can run. Ultimately, the tests will stop being run, due to
slowness.

>>
>> uhm2. This is just asking for nightmares. If you want speed (don't want to
>> involve a database) you test "transformations", not the fetch-transform
>> couple.

Yes, but there's a light in the path.

There are 2 performance problems with integration tests:
a) the database
b) the application server

With web2py.test [2] I try to address both.

For the DB part, I mimic what Django does to speed up tests that need a
database: create an in-memory database.

For the application server part, I clone the web2py environment,
simulating a shell environment. You have db and templates, but you don't
have a real request. Actually, you don't even need to have the server
running (web2py, or apache, or nginx).

Yes, there are drawbacks in both scenarios. But they exist in any test
environment. Django's test client is more robust and gives us some
tools to work with out of the box, but I think web2py can evolve in this
area, toward better support for a test-based development workflow.


[2] http://github.com/viniciusban/web2py.test


>>
> Sure, but you want to use your existing business-logic core, which usually
> contains db-object usage...
> It's not a nightmare to monkey-patch these objects/calls like that; it's
> actually a pretty common pattern.

You don't even need to monkey-patch anything. You only need an
integration test. It's simpler.

>>>
>>> You could theoretically monkey-patch this whole statement in its
>>> entirety (or any part of it), so it returns whatever you want (say, a
>>> pre-made 'rows' instance with 'row' instances inside).
>>> The 'mock' library should allow you to do that, while constraining the
>>> monkey-patch to a temporary execution context, and tearing it down
>>> automatically at the end of the test (either with a decorator, or a
>>> context manager that is provided in the library).

No. You just need an in-memory database. ;-)

Arnon Marcus

unread,
May 16, 2013, 11:43:10 AM
to web...@googlegroups.com
I'm not sure how you are going to implement an in-memory relational database that can be used with the same db-object-using code - that sounds interesting...

But this has a smell of having your unit-test testing the framework more than your code - the same problem that exists in Django.

Ideally this should be avoided...
And it can be - with mock objects that substitute for the db object, emulating the API calls that your code is using. It's easier than you might think, using the 'mock' library - check out the link I posted.
It would also be even faster than an in-memory database, and much simpler.
The only downside is that any time your query code changes, you would have to update your tests...

Vinicius Assef

unread,
May 16, 2013, 1:09:25 PM
to web2py
Hi Arnon.

The idea to have an in-memory db is to test a web app the same way Django does.

So, it's up to the developer to choose what must be coded in which
function (or method, or routine). If he/she wants to mix all sorts of
rules and accesses, it's possible. If not, that's possible too. In this
case, given the simple approach web2py has to data access, I think it's
not necessary to worry about this kind of abstraction. In my personal
experience, it has more cons than pros, but I don't own the truth.
;-)

I agree it is an architecture smell. But, again, a fully decoupled
system is not a simple thing to achieve. Personally, I'd rather have
some pieces coupled to a data access layer, with tests, than a
complicated way to decouple things and be purist in the extreme.

I haven't finished my studies in this area. I'm running some experiments
with all these things.

Anthony

unread,
May 16, 2013, 1:37:09 PM
to web...@googlegroups.com
I'm not sure how you are going to implement an in-memory relational database that can be used with the same db-object-using code - that sounds interesting...

db = DAL('sqlite:memory')
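A minimal standalone sketch of how that can be used (the 'task' table is just an example; it needs the web2py source on sys.path):

    from gluon.dal import DAL, Field

    db = DAL('sqlite:memory')                # lives only for this process
    db.define_table('task', Field('title'))
    db.task.insert(title='x')
    assert db(db.task).count() == 1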

Arnon Marcus

unread,
May 16, 2013, 2:41:26 PM
to web...@googlegroups.com
+1 !
:)

Arnon Marcus

unread,
May 16, 2013, 2:57:48 PM
to web...@googlegroups.com
Well, as an aside, I am planning to use a RAMDISK for my new production server, as a read-only database.
The approach is to use PostgreSQL's Master/Standby streaming-replication features.
It goes like this:
You have 2 instances of PostgreSQL:
1. Master: write-only, with a data directory sitting on our SAN storage, mounted via NFS.
2. Standby: read-only, with a data directory sitting on a RAMDISK mount point.

In addition, I plan to use PG-POOL, a PostgreSQL-specific tool, as a front-end proxy to these 2 servers.
The servers themselves can be anywhere - preferably 2 separate machines, or 2 VMs.
PG-POOL would act as a proxy, and also as a connection pool. It would analyze the queries, and automatically re-route all reads to the RAMDISK Standby, and all writes to the Master. It also takes care of managing the streaming replication that is configured in PostgreSQL for replicating all writes done to the Master onto the read-only RAMDISK Standby. The replication can be configured to be synchronous or asynchronous. In a synchronous configuration, a write-transaction commit to the Master would not be considered successful before that transaction has been successfully replicated to - and committed on - the read-only replica (the RAMDISK Standby); only then would the write transaction on the Master be considered complete and successful. This is the safest, albeit slowest, approach. But it is only slow on writes, which are much less common in our web app, so I can live with that.

I thought of using this approach to build a third replica that is also writable, but only accepts schema changes...

But I guess an SQLite option would do just fine.

I also like Anthony's option; I think it might even be faster than what you are currently doing in this experiment...
Anyway, interesting work here, I'll continue looking into it...

Arnon Marcus

unread,
May 17, 2013, 4:23:39 AM
to web...@googlegroups.com
I have some more questions about using an alternative database for testing.

What would happen to the schema-log file?
Wouldn't having the same model code using 2 different databases mess up the log and break the automatic-migration capability?
I mean, you could turn migration off when testing, but wouldn't that prevent the model from being able to actually create your tables?
Or does web2py manage different logs for each connection URL?
And then there is the issue of having differences between schema definitions across different databases - for example the way booleans are implemented, or missing features in the DAL, like support for multi-column unique constraints... I'm not even sure if SQLite supports that at all... How would this approach handle such issues?

Arnon Marcus

unread,
May 17, 2013, 4:44:06 AM
to web...@googlegroups.com
Another issue:
Testing controller-actions is not considered unit-testing, as it relies on an external dependency that is supposed to be generated by the framework.
In web2py it's even worse - you have to prepare an entire execution environment, as is done in this experiment... Generally, treating these kinds of tests as unit-tests is considered a mistake. They should be reserved for the much-less-frequent integration-tests, which then use actual template parsing. This is why it is suggested that such code should be extremely short, and mostly call other modules.
If you really want to treat your controllers as viable for unit-testing, then the environment should be mocked/faked, and not constructed using the actual framework. Also, the templates should not be used - the tests should receive whatever the templates would receive in production.
Then, for unit-testing the templates, a mocked-out representation of the environment and controller output should be used, but with the actual templating engine.
What are your thoughts on that?

Arnon Marcus

unread,
May 17, 2013, 6:19:16 AM
to web...@googlegroups.com
Using this, would it mean that no file is generated in the file system?
Does this mean that all that temporary-folder/file jazz would not be required?
In that case, you might not even have to clear out the tables from the previous test run, as there wouldn't be any, right?
But it should still require clearing out data between tests, right?
And what about the schema-log file? Would that still have to be created? It would seem redundant, as there would not be any use for it - unless consecutive tests are modifying the schema and committing.

How would the life-cycle of such a db-object be managed?

If every test re-runs the model that defines the db object, then it would be created from scratch every time - in that case no schema log is needed at all, but it could be slower computationally - though the lack of file-system access may offset that shortcoming and actually make it faster(?)
Also, conditional schema creation may improve performance as well: since the db schema is regenerated on each test, each one could construct only the tables it actually needs.

If the db object is reused across tests, and conditional schema creation is applied, then how would the migrations happen? In that case it would need a schema log, right? That may cancel out the benefit of the conditional creation, by having it access the file system... So if the db object is still reused, but no conditional creation is applied, then setup might take a bit longer, as it would generate all tables, but then a schema log could really be unnecessary - this could give the best of both worlds - no re-creation of tables, AND no file-system access - right? Can that be accomplished? I mean, generally, to avoid using a schema log, migrations need to be turned off. But wouldn't that prevent the DAL from being able to even create the tables in the first place?



Anthony

unread,
May 17, 2013, 7:33:11 AM
to web...@googlegroups.com
On Friday, May 17, 2013 4:23:39 AM UTC-4, Arnon Marcus wrote:
I have some more questions about using an alternative database for testing.

What would happen to the schema-log file?
Wouldn't having the same model code using 2 different databases mess up the log and break the automatic-migration capability?
I mean, you could turn migration off when testing, but wouldn't that prevent the model from being able to actually create your tables?
Or does web2py manage different logs for each connection URL?

Same as how web2py would handle multiple databases in the same app in general -- see http://web2py.com/books/default/chapter/29/06#Migrations. By default, the *.table filenames are prepended with a hash of the db connection string (or you can manually name each .table file in the call to .define_table).
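For instance, in a model file (illustrative; each connection gets its own hash-prefixed *.table files by default, or you can name the file per table):

    db = DAL('sqlite://storage.sqlite')
    db.define_table('person', Field('name'), migrate='person.table')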
 

And then there is the issue of having differences between schema definitions across different databases - for example the way booleans are implemented, or missing features in the DAL, like support for multi-column unique constraints... I'm not even sure if SQLite supports that at all... How would this approach handle such issues?

Obviously, if you need to test database-specific features, you'll need a copy of the specific database you want to test.

Anthony
 

Anthony

unread,
May 17, 2013, 7:36:30 AM
to web...@googlegroups.com
On Friday, May 17, 2013 6:19:16 AM UTC-4, Arnon Marcus wrote:
Using this, would it mean that no file is generated in the file system?
Does this mean that all that temporary-folder/file jazz would not be required?
In that case, you might not even have to clear out the tables from the previous test run, as there wouldn't be any, right?
But it should still require clearing out data between tests, right?
And what about the schema-log file? Would that still have to be created? It would seem redundant, as there would not be any use for it - unless consecutive tests are modifying the schema and committing.

The DAL doesn't generate any *.table files for "sqlite:memory" databases.
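So a per-test fixture can simply rebuild the schema every time, e.g. (a sketch; the fixture and table names are invented):

    import pytest
    from gluon.dal import DAL, Field

    @pytest.fixture
    def db():
        # A fresh in-memory database per test: no *.table files,
        # no migration log, nothing to clean up afterwards.
        db = DAL('sqlite:memory')
        db.define_table('task', Field('title'))
        return db

    def test_insert(db):
        db.task.insert(title='x')
        assert db(db.task).count() == 1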

Anthony

Arnon Marcus

unread,
May 17, 2013, 8:07:43 AM
to web...@googlegroups.com
That's good news!

Now the only question that remains is whether test performance using this would be fast enough to be considered fitting for interactive TDD...

Otherwise the DAL-using code would still be better off 'mocked' away...

Vinicius Assef

unread,
May 17, 2013, 8:53:01 AM
to web2py
Arnon, how many use cases does your application have?

Is time to run tests really a bottleneck in your case?

Arnon Marcus

unread,
May 18, 2013, 6:48:51 AM
to web...@googlegroups.com

Arnon, how many use cases does your application have?

Is time to run tests really a bottleneck in your case?


I'm not sure I know how to answer this question, since we have been working on our code for more than 3 years now, and there is currently zero testing in it (which obviously makes me nervous...), as testing was never given any thought or priority in our development. I am, as I said, in the process of learning this sub-field, evaluating options, and gathering material to use for "selling" the importance of testing to my superiors so we can get it into our schedule - we're an animation studio, not a software company, and it's mainly just me and one other developer, and we were both new to Python when we started this, so this is how it kind of grew out of necessity...
But a lot of the code is actually javascript/css, so it's a difficult question for me to think about as of now.
We do have some thousands of lines of Python code, and our system has grown to be really large already.
We don't yet have a formal process in our development (which is another worry of mine), so I can't really know how many use-cases we have, because we haven't counted them yet...
I would assume it's in the low hundreds already, though, and going to be pushing into the high hundreds in less than a year from now (maybe even more than a thousand, we'll see), so we are talking about a considerable number of tests in any case.

BTW, here is a really awesome talk about testability:

Using his metric, we should have roughly the same amount of test code as actual software code, so by that measure we are probably talking about roughly 5,000 lines of Python code alone, already - it's going to be a substantial code-base of tests. And the use-cases in that code are really varied; if we're talking 'coverage' metrics, there are many branches in most functions, so the number of tests might get multiplied...

So to sum it up, we are probably talking about many hundreds of tests that we're going to need to write in the following year, so the answer to "is performance going to be an issue" would probably be "yes"...

Arnon Marcus

unread,
May 18, 2013, 6:54:21 AM
to web...@googlegroups.com
Here is a really great talk about testing in general, and testing web apps in particular:

Mika Sjöman

unread,
May 19, 2013, 3:23:35 AM
to web...@googlegroups.com
Hi

I really badly want to get into testing, because right now I have an application breaking all the time since we do manual QA. Is there any video guide on Selenium with web2py, other than the one at killerwebdevelopment.com? I could not get Selenium working when I tried.

I would really like to get started with Selenium, but it seems so difficult.

Cheers

Niphlod

unread,
May 19, 2013, 9:21:01 AM
to web...@googlegroups.com
Vinicius and I are working on a base starting point for testing applications with web2py.

PS: either way, web2py will not help you write tests with selenium; web2py will just provide a simpler integration for running whatever tests you like (i.e., if you choose selenium you will have to write the tests with it, and web2py will just provide a simple "launcher").

Arnon Marcus

unread,
May 19, 2013, 7:31:52 PM
to web...@googlegroups.com

Arnon Marcus

unread,
May 19, 2013, 7:48:46 PM
to web...@googlegroups.com
What is the currently suggested way of using pytest/nose in web2py?
This current thread is about a WIP script, and I have already seen better approaches.
I've seen some pytest-like and/or nose-like tests of web2py itself within its folders, and there was a comment in the shell script that it should work with both nose and pytest.
I've also taken a look at many implementations of devising environments for testing in web2py.
But there seems to be quite a lot of fragmentation on this issue...
The most promising option I found was the one in the old web2py_utils project, but it's really old and that project seems to have died a long time ago... right?

It seems that there is a growing need for a centralized place presenting a canonical way of using nose/pytest and devising an optimal environment for testing with them. It should also be pythonic, and not a shell script one might accidentally stumble across... or a corpse of a project one might somehow arrive at... or assorted fragmented experiments dispersed around...

Where is the "good stuff"?
Where is the "best-practice"?
Where is the "batteries-included"?

This should be trivial nowadays, and be well documented in the book or somewhere... The web-interface doctest is not a viable option for large-scale applications - it is more and more considered unprofessional to treat doctests as an option... They should be reserved for their original purpose - making sure you don't document your code with buggy examples... but that's it...

As for the standard-library unittest/unittest2... It's almost mid-2013 - let's move along, shall we?
But even for that, there should be some centralized documentation of a canonical practice...

Vinicius Assef

unread,
May 20, 2013, 1:23:13 AM
to web2py
On Sun, May 19, 2013 at 8:48 PM, Arnon Marcus <a.m.m...@gmail.com> wrote:
>
> Where is the "good stuff"?

The "good stuff" is in our hands and heads. We can join together to
help make it happen.
That's the beauty of open source. Join us and help us develop this
kind of thing, if what exists today doesn't fit your needs.


> Where is the "best-practice"?

There's no Web2py "best-practice" concerning this subject, yet.
As Niphlod wrote: we're working on it and web2py.test [1] is a path.


> Where is the "batteries-included"?

They're neither "charged" nor "installed" yet.


But what have you tried so far?
Did you try web2py.test?
Have you had some success? Some problem?
Do you have some suggestion to improve it?
Do you have some advice?
How can you help us to evolve this WIP?

[1] http://github.com/viniciusban/web2py.test

--
Vinicius Assef

Arnon Marcus

unread,
May 20, 2013, 6:37:09 PM
to
<Edit: confused rant removed... I thought web2py.test was something else for a moment - if you saw that before this update, please ignore it>

I hate to be the 'nagging' persona here, but really, my problem is not about 'working out a solution'. My problem is the wild-west reality I have been facing while researching this. It's a documentation/organizational issue, not a technical one. I wouldn't know what to answer about the technical stuff - I would expect there to be much more knowledgeable people than me on these issues, who would have much better answers than what I could come up with... I would just like to have had such people oriented towards documenting this, or at the very least have some topic, or even a 'mention' about this in this user group. If nothing else, so people would know that there is something in the works, and not be working independently on their own implementations, completely blind to what is going on with this. The lack of exposure about this is creating a vacuum out of which duplication of efforts emerges - this is inefficient.
The same goes for research-resources that are duplicated and wasted - I'm sure I am not the only one who has gone through this research.
There needs to be better management and effort synchronization, as well as better transparency about "things that are in the works", and a centralized 'go-to' place for looking such things up. A section on the main web2py website, for example, a news feed, an RSS feed, a mailing newsletter, I don't know, "something"... The book is awesome, but it's a reference manual, and it's already way too long - more than 600 pages as a PDF...
And it is not updated as often as things come up - it's the wrong tool for the job. There need to be more kinds of documentation options, some of which should be updated much more frequently.

Arnon Marcus

unread,
May 20, 2013, 6:52:48 PM
to web...@googlegroups.com
Just as an example of how the absence of communication has led to poorer and unnecessary implementation(s):

In web2py_utils, I've seen Thadeus use 'sqlite:memory' in the connection string. This completely circumvents the entire issue that many implementations I have seen keep hammering on - the issue of having to clear out the fixtures on each test-session setup, and the issue of having an SQLite file locked by another process. When you use an in-memory instance of SQLite in your setup implementation, you completely avoid such issues, AND it's faster AND simpler.

A centralized location for consolidating such conclusions would have prevented this duplicated and redundant effort from occurring in the first place. The web2py_utils project is from 2011, so this means that solutions to issues have been missed by some people, even though (or perhaps "because") they were done a long time ago.

Vinicius Assef

unread,
May 20, 2013, 7:11:32 PM
to web2py
web2py.test is not mentioned in the book because it's not part of
web2py. It's a project to make testing apps easier in web2py.

Yes, it's a personal project (by myself), but it's not merely a learning
exercise. It's a work in progress to help people develop test-driven
apps in web2py. It's been used in one pilot project, and it's not
labs. It's for real work.

Will it be part of web2py someday? Maybe. Maybe not. I'm working on it,
addressing some concerns. If the core developers think it's useful, it
will be part of web2py and will be mentioned in the book as well.

But, for now, people can use it and suggestions are welcome.





On Mon, May 20, 2013 at 7:25 PM, Arnon Marcus <a.m.m...@gmail.com> wrote:
> You see what I mean?
> I had no idea about this thing...
> And that is after about a full week's worth of time of research.
> This is very telling I think, and is exactly my point.
>
> I hate to be the 'nagging' persona here, but really, my problem is not about
> 'working out a solution". My problem is the wild-west reality I have been
> facing while researching this.
>
> Is web2py.test in the book?
> Where should I have 'stumbled' upon it?
> How is it that I have managed to miss it completely?
> I ran searches in the book, in this group for the keywords "nose", "pytest"
> "py.test", "testing", and the like, as well as some google searches, and
> meticulously read through almost all of the search-results that came up in
> all cases - and now, by sheer serendipity, you are presenting me with this
> thing that I somehow have managed to miss entirely...
> Does this sound like a nice learning experience to you?
> This is the issue at hand - it's a documentation/organisational issue, not a
> technical one. I wouldn't know what to answer about the technical stuff - I
> would expect there to be much more knowledgeable people than me on these
> issue that would have much better answers than what I would be able to come
> up with... I just would like to have had such people oriented towards
> documenting this, or at the very least have some topic, or even a 'mention'
> about this in this user-group. If nothing else, so people would know that
> there is something in the works, and not be working independently on their
> own implementation, completely blind to what is going on with this. The lack
> of exposure about this is creating a vacuum out of which
> duplication of efforts emerges - this is inefficient.
> The same goes for research-resources that are duplicated and wasted - I'm
> sure I am not the only one who has gone through this research.
> There needs to be better management and effort-synchronization, as well as
> better transparency about "things that are in the works", and a centralized
> 'go-to' place for looking such-things up. A section in the main web2py
> website, for example, a news-feed, an rss, a mailing-newsletter, I don't
> know, "something"... The book is awesome but it's a reference-manual, and
> it's already way too long - more than 600 pages on a PDF...
> And it is not updated as often as things come up - it's the wrong tool for
> the job - there needs to be more kinds of documentation-options,
> some-of-which should be much more frequently updating.
>
>

Anthony

unread,
May 20, 2013, 9:40:34 PM
to

Just as an example of how the absence of communication has led to poorer and unnecessary implementation(s):

In web2py_utils, I've seen Thadeus use 'sqlite:memory' in the connection string. This completely circumvents the entire issue that many implementations I have seen keep hammering on - the issue of having to clear out the fixtures on each test-session setup, and the issue of having an SQLite file locked by another process. When you use an in-memory instance of SQLite in your setup implementation, you completely avoid such issues, AND it's faster AND simpler.

Note, web2py_utils is linked from http://www.web2py.com/init/default/documentation, so it is discoverable, and it's not clear that others who have implemented testing solutions were unaware of it (in fact, several solutions do seem to reference and build upon each other). There's also no reason other solutions couldn't use sqlite:memory if desired. In some cases, though, you might want to have a test database populated with real data, so you wouldn't want to use sqlite:memory.

It's not clear what you're looking for, though. You don't seem to think information about testing belongs in the book, but I don't see why not (particularly since there are some complications introduced by the web2py execution environment). Aside from that, you seem to want to know what other people are working on and what they have learned. Often when people have learned something, they post it on web2pyslices, and there are a couple of recipes there related to testing. That's generally the place for sharing solutions. As for what people are working on, certainly they can and do make announcements here and on the developers' list. What else should there be?

Anthony

Arnon Marcus

unread,
May 23, 2013, 12:47:35 PM
to web...@googlegroups.com
On the contrary. I think information about testing with web2py, in conjunction with various testing frameworks/tools, is highly relevant in the book, along with common testing practices and the way they apply when testing with web2py.

The book, in that case, would act as an information-centralization tool. So it's not about the book. It's about information centralization/consolidation, for the sake of research efficiency and the prevention of duplication of efforts. There may be other tools/platforms that can serve this role.

The book might be a less efficient way than others, in terms of how frequently it is updated.

I am deliberately refraining from specific suggestions, because the actual solution implementation is less important than understanding the problem. The need is more important than the strategy for meeting it.

Where I think a book is a terrible option is for exposing frequently-updated information - say, the announcement of a feature project that is underway. That belongs in a "news feed", a newsletter, or both.

The two worlds might meet, say, in an announcement of additions to the book, with links to the chapters.

The FAQ is really old and dated, so I think it should be updated as well. It uses some useful categories that could be retargeted to a news feed.


Anthony

unread,
May 23, 2013, 2:02:15 PM
to web...@googlegroups.com
We can update the book as frequently as we like. I think this is the place for announcements (there's also the Twitter feed). Aside from that, I suppose we could maintain some kind of framework roadmap document, but we don't really have a formal roadmap process, and I'm not sure there is a desire to adopt one.

Anthony

Arnon Marcus

unread,
May 24, 2013, 9:58:08 AM
to web...@googlegroups.com
There doesn't necessarily have to be a formal road-map "process" in existence for there to be a road-map section on the web2py website.
For example, I like how Redmine's road-map section is structured:
There is also an explanation on updating it on the wiki tab:
I think web2py should have something similar.

As for announcements, I disagree - that's another duplication of efforts and a multiplication that creates confusion. If there is already a maintained Twitter feed, then it should be used - embedding a Twitter feed into a website is common and trivial nowadays; people expect it. There should be a Twitter-feed component right on the front page of the web2py website.

Book updates should be linked to from that Twitter feed.
Announcements should be short and frequent.
Book updates should be extensive and proofread.



Anthony

unread,
May 24, 2013, 10:54:13 AM
to web...@googlegroups.com
There doesn't necessarily have to be a formal road-map "process" in existence for there to be a road-map section on the web2py website.
For example, I like how Redmine's road-map section is structured:

I think you would need a formal process to determine which items will make it into which releases.
 
As for announcements, I disagree - that's another duplication of efforts and a multiplication that creates confusion

Not sure what you mean about duplication. If you have something to announce, post about it here.

- if there is already a maintained Twitter feed, then it should be used - embedding a Twitter feed into a website is common and trivial nowadays; people expect it.

Well, it's probably not quite active enough now to justify it, but that sounds fine. Note, the feed is currently displayed on the home page of the admin app.
 
Book updates should be linked to from that Twitter feed.

For now, you can subscribe to the feed for the repo commits: https://github.com/mdipierro/web2py-book/commits/master (you can also subscribe to commits to the main web2py repo as well as to postings to the Google Code issues list if you're interested). You wouldn't want that automated to Twitter, though, as many are just fixing typos or making small changes.

Keep in mind, this whole thing is run by volunteers (and not volunteers who are paid by a company to "volunteer," as with some open source projects), so you might have to temper your expectations a bit. In many cases, if you want something done, you'll likely have to be the one to do it (or at least put significant effort into initiating it). The fact that it hasn't already been done likely means that no one else has yet been willing to make the effort. So, if you want a Twitter feed on the home page, send Massimo a patch for the "examples" app. If you would like timely, centralized announcements, maybe volunteer to coordinate and manage that.

Note, I don't want to discourage you or anyone from making observations about pain points or suggestions for improvement -- constructive feedback is always helpful. Just be mindful about how you present the feedback, given that you are asking volunteers to give up more of their time to make your life easier. :-)

Anthony

Arnon Marcus

unread,
May 24, 2013, 2:56:20 PM
to web...@googlegroups.com
Well, the way I understand it, the admin app is a web2py app, and so is the examples app - which is the web2py website - so assuming the admin app uses a component for that Twitter feed, including it on the web2py website should be as trivial as adding it to the examples app. Is this what you mean? If so, why should I bother figuring out how to copy that component so it welds in well with the examples? Shouldn't the ideal person to do that be the person who writes/maintains these apps? It would probably take him a fraction of the effort it would take me...

As for coordination, I see what you're getting at with the temper thing, but is it really that much of a stretch to expect an open-source project, even a purely volunteer-based one, to already have coordination roles assigned to contributors?

I have zero experience and/or interest in being a coordinator for an open-source project, but I am sure there are many people, even in this community, who would be better suited. A project manager who knows enough existing members should be suited to guessing who would fit the role, wouldn't he?
I have no experience participating in any open-source project, but I did participate in a volunteer-based project which was not software-related, and it usually works like this there...



Anthony

unread,
May 24, 2013, 3:23:39 PM
to web...@googlegroups.com
Well, the way I understand it, the admin app is a web2py app, and so is the examples app - which is the web2py website - so assuming the admin app uses a component for that Twitter feed, including it on the web2py website should be as trivial as adding it to the examples app. Is this what you mean? If so, why should I bother figuring out how to copy that component so it welds in well with the examples? Shouldn't the ideal person to do that be the person who writes/maintains these apps? It would probably take him a fraction of the effort it would take me...

You are of course free to ask Massimo or anyone to add the Twitter feed to the "examples" app. If they don't get around to it, though, then that leaves you to do it, even if it takes you many times longer than it would have taken someone else. We can't expect Massimo to do everything just because he can do it faster than most others. I bet in the amount of time you have spent discussing this topic here you probably could have managed it yourself. You might even learn something in the process, making you faster on the next task -- that's how people go from being novice to expert.

As for coordination, I see what you're getting at with the temper thing, but is it really that much of a stretch to expect an open-source project, even a purely volunteer-based one, to already have coordination roles assigned to contributors?

It's not as if no one is doing anything -- it's just that no one is doing the particular thing you want in the way you want it.
 
I have zero experience and/or interest in being a coordinator for an open-source project, but I am sure there are many people, even in this community, who would be better suited.

Perhaps, but if they're not interested (or can contribute more value to the framework in some other way), that may just leave you. If you're not interested, why should anyone else be?

 
Anthony

Arnon Marcus

unread,
May 25, 2013, 6:06:12 AM
to web...@googlegroups.com

Perhaps, but if they're not interested (or can contribute more value to the framework in some other way), that may just leave you. If you're not interested, why should anyone else be?

Because other people are different from me. :)
They might like this sort of thing.
Pffff.... Don't even get me started on Adam Smith's BS....
I liked this quote:
"...comparative advantage is a metaphysical assumption, rather than a discovery..."

I will only gain an advantage from learning things in order to change them if the effort I invest in the practice of learning is offset by the value I gain from the subsequent changes I later make. It presupposes that there will be such changes...
Long-term investments are only profitable in "iterative" occurrences, in which each consecutive occurrence gains more benefit, reducing the overall impact of the up-front investment. In the case of a one-off action one needs to take, the invested effort becomes unprofitable.

In short, for people who are, or plan to be, "contributors" to an open-source project (such as yourself), you are absolutely right.
As for the rest of us, though...

Arnon Marcus

unread,
May 25, 2013, 6:15:38 AM
to web...@googlegroups.com

I think you would need a formal process to determine which items will make it into which releases.

How about a poll-voting thing?

Not sure what you mean about duplication. If you have something to announce, post about it here.


That's even worse...
Now you are suggesting a third avenue...
What I mean by "duplication" is that you have multiple places that need to get updated every time an announcement is in order.
By your suggestion, take the recent 2.4.7 release that Massimo just posted here - he would then also have to post it to the Twitter feed, as well as update the book.
That's a duplication of effort on his part.

But it gets worse - users of web2py (developers of applications) would have multiple places they need to keep track of... Maybe not all updates get announced in every one of the outlets? That's what I mean by "multiplicity".

I think in the long run, the one-off effort of adding the Twitter-feed component to the examples app / web2py website, and announcing here that that is the place to look for announcements, would both save effort for Massimo and the rest of you developers, and save research effort on the side of web2py's users.
So, overall, a very profitable investment for all.

As for convenience-of-discovery of announcements, an RSS feed may also be in order - which is also trivial using web2py.

Anthony

unread,
May 25, 2013, 7:20:27 AM
to web...@googlegroups.com

Not sure what you mean about duplication. If you have something to announce, post about it here.


That's even worse...
Now you are suggesting a third avenue...
What I mean by "duplication" is that you have multiple places that need to get updated every time an announcement is in order.
By your suggestion, take the recent 2.4.7 release that Massimo just posted here - he would then also have to post it to the Twitter feed, as well as update the book.
That's a duplication of effort on his part.

I'm suggesting making all announcements here. Updating the book is a separate activity altogether and therefore not a duplication of effort. Twitter is an additional outlet but should not be the primary one given its 140 character limit.

Anthony

Anthony

unread,
May 25, 2013, 7:49:49 AM
to web...@googlegroups.com

Perhaps, but if they're not interested (or can contribute more value to the framework in some other way), that may just leave you. If you're not interested, why should anyone else be?

Because other people are different from me. :)
They might like this sort of thing.

That's my point -- if others are not doing what you want, that is likely because they don't want to (i.e., they are not different from you). If they like this sort of thing, then they'll do it.
  

Pffff.... Don't even get me started on Adam Smith's BS....
I liked this quote:
"...comparative advantage is a metaphysical assumption, rather than a discovery..."

I will only gain an advantage from learning things in order to change them if the effort I invest in the practice of learning is offset by the value I gain from the subsequent changes I later make. It presupposes that there will be such changes...
Long-term investments are only profitable in "iterative" occurrences, in which each consecutive occurrence gains more benefit, reducing the overall impact of the up-front investment. In the case of a one-off action one needs to take, the invested effort becomes unprofitable.

I wasn't suggesting that you personally would gain more than you give by making contributions -- you won't. I just meant that relative to someone like Massimo, you have a comparative advantage in easy tasks over hard tasks, even though you are less efficient at both. Suppose Task A takes an expert 1 hour and you 4 hours, and Task B takes an expert 2 hours and you 16 hours. You have a comparative advantage in Task A, so you should do it, even though it takes you longer. You doing Task A and the expert doing Task B will take only 6 hours of community time, rather than 17 hours of community time with the opposite assignments (in the extreme, suppose you don't know how to do Task B at all, so it doesn't even get done). Put more simply, if less expert users work on some of the easier tasks, that frees up the more expert users to handle the harder tasks for a greater net benefit to the community.

Anthony

Arnon Marcus

unread,
May 25, 2013, 5:04:40 PM
to
Ok, I get it now. But still, the most efficient way would be for the people with the most experience in a given area to be the ones to maintain it. You are putting a restriction on schedule that does not apply here. Within a time-frame that is smaller than what an experienced person has available, you are right. But as you said, web2py does not have a formal release cycle, and there is currently no incentive for defining one. So, in other words, your example is correct only when applied to a scheduled project, which web2py clearly is not, by your standards. So in that case, the experienced person would be best suited for doing both assignments. The overall community time invested would be only 3 hours. It may take longer to complete schedule-wise, but that has no relevance to an unscheduled project. It may take Massimo even a few months to get to it, for all I care; it would still remain more efficient.

Arnon Marcus

unread,
May 25, 2013, 4:25:39 PM
to web...@googlegroups.com
I think that 140 characters is more than sufficient for announcements. I also think that one may be able to demonstrate that the Twitter community at large would agree with this assertion.

Anthony

unread,
May 25, 2013, 9:04:11 PM
to
Ok, I get it now. But still, the most efficient way would be for the people with the most experience in a given area to be the ones to maintain it. You are putting a restriction on schedule that does not apply here. Within a time-frame that is smaller than what an experienced person has available, you are right. But as you said, web2py does not have a formal release cycle, and there is currently no incentive for defining one. So, in other words, your example is correct only when applied to a scheduled project, which web2py clearly is not, by your standards. So in that case, the experienced person would be best suited for doing both assignments. The overall community time invested would be only 3 hours. It may take longer to complete schedule-wise, but that has no relevance to an unscheduled project. It may take Massimo even a few months to get to it, for all I care; it would still remain more efficient.

Sorry, but this doesn't make any sense. Just because there isn't a formal release schedule with particular features promised by a particular date does not mean there is no benefit to having features available sooner rather than later. By your reasoning, vaporware has the same value as real software.

Anyway, I think you have made it clear that you would rather not make any contributions but instead limit yourself to requesting that other people spend their time building things for you for free.

Anthony

Anthony

unread,
May 25, 2013, 9:07:52 PM5/25/13
to web...@googlegroups.com
On Saturday, May 25, 2013 4:25:39 PM UTC-4, Arnon Marcus wrote:
I think that 140 characters is more than sufficient for announcements.

Only for announcements that include links to the real announcement.
 
I also think that one could demonstrate that the Twitter community at large would agree with this assertion.

Sounds like that would be a biased sample.

Anthony 

Massimo Di Pierro

unread,
May 25, 2013, 11:18:30 PM5/25/13
to web...@googlegroups.com
Hello Arnon,

let me explain how we operate. First of all, we are all volunteers, so we try not to commit to promises we cannot keep. For the same reason, we do not feel like putting pressure on those volunteers.

web2py evolves more by natural evolution than by intelligent design. If a feature is really necessary and doable, then we do find enough resources to get it done. We do this in small steps, not large jumps. Large jumps have happened and will continue to happen, but in my experience, they do not happen because we put them on a roadmap.

In general, our evolutionary changes feel the following pressure forces:
- they should add functionality
- they should do so in a general way
- or they should reduce the code size
- they should make the code faster, never slower
- they must be backward compatible

Very few of the patches I receive have been agreed upon in advance. People play with the code, find ways to improve it, and email the changes to me. I check that the five rules above are met, maybe do some quality control on the code, and eventually approve it.

If something is needed, then there should be enough people working on it already. If there is no critical mass working on something, then probably there is not enough interest.

We have polled people before, but it was always about how to implement a new feature, not whether to implement it. This is because developers do want feedback from users, but if the needs do not come from developers, things will not get done.

There are other models. Certain features have been sponsored. For example, the SahanaPy Foundation has paid developers to add some core features they needed, including the geo DAL.

I think a roadmap would be useful. It should contain a list of desired features, a discussion of feasibility, and a clear statement from me (and other core developers) about priorities. But it should not contain deadlines. I do not think the roadmap should come from me. I am satisfied with what I have. I think it should come from users, so that I and others learn more about users' needs.

Massimo

Arnon Marcus

unread,
May 26, 2013, 5:00:17 AM5/26/13
to web...@googlegroups.com

Only for announcements that include links to the real announcement.
 

Let's see:
(54 characters) 

2.4.7 is out:
pypy support, thanks Niphlod
more bug fixes
(57 characters)

Obviously, for more verbose change-logs, a link can be used.
But the change-log update is not a product of the announcement - it's going to have to happen anyway - so it's not a "redundant duplication due to process", more like a "convenience detailing, if possible"...
 

Sounds like that would be a biased sample.

We can test our hypotheses, if you like, but I've just given a practical real-world use-case example that works...

Arnon Marcus

unread,
May 26, 2013, 5:10:39 AM5/26/13
to web...@googlegroups.com
Thanks for explaining, Massimo.

However, it seems that our common interest in a road-map may not fit the way you operate - as you said, if developers don't need a feature, it will not be written.
This rules out the possibility of web2py developers answering the needs of web2py users.

I agree that a road-map should not contain deadlines - that makes sense (I hate deadlines... :) ) - this way a more efficient use of man-power would be possible, as the "eventual existence" of a well-written feature is, in most cases, of higher priority to users than the immediate availability of a poorly written version of that same feature. This way, the person with the most experience/knowledge of a given section of the code would be the one to develop that feature, however long it may take for him to get to it.
That was the point I tried to convey to Anthony - it feels like we see more eye-to-eye on this point.

Anthony

unread,
May 26, 2013, 8:11:43 AM5/26/13
to web...@googlegroups.com
Arnon, once again, we agree. It is true that some announcements are under 140 characters and that all announcements could be made by including a link to a longer announcement. But then you need a separate place for the linked longer announcements (such as Google Groups). Note that this very thread started with an announcement that is well over 140 characters.

Anthony

Anthony

unread,
May 26, 2013, 9:09:05 AM5/26/13
to web...@googlegroups.com

However, it seems that our common interest in a road-map may not fit the way you operate - as you said, if developers don't need a feature, it will not be written.
This rules out the possibility of web2py developers answering the needs of web2py users.

Arnon, I think you are somewhat missing the point. This is not necessarily true of all open source projects, but at least with web2py, the users are the developers. Of course, not all users are developers, but contributions to the framework (as well as other aspects of community maintenance) come from users. As Massimo suggested, if some feature is really important or broadly useful, some user or users will end up working on it, likely because they themselves need it and are willing to put in the extra effort to generalize the solution. If nobody is willing to work on a particular feature, it is likely because there just isn't a strong enough need for it. If you're the only guy asking for something, don't expect someone else to do it for you.
 
I agree that a road-map should not contain deadlines - that makes sense (I hate deadlines... :) ) - this way a more efficient use of man-power would be possible, as the "eventual existence" of a well-written feature is, in most cases, of higher priority to users than the immediate availability of a poorly written version of that same feature. This way, the person with the most experience/knowledge of a given section of the code would be the one to develop that feature, however long it may take for him to get to it.
That was the point I tried to convey to Anthony - it feels like we see more eye-to-eye on this point.

The point you conveyed was that you were unwilling to spend your time on a relatively easy task because you thought someone else could do it faster (though, it's starting to sound like you wouldn't be willing even if you could do it just as quickly, as you consider yourself to be a mere "user" who expects "developers" to respond to your needs). I have not suggested that we should prefer poorly-written features over well-written features merely to get the features sooner. I was actually making the opposite point -- that less expert users should work on the easy tasks (that they can do), even if it takes them a bit longer, leaving the more expert users to work on the harder tasks. And of course, there are always trade-offs -- we might prefer a competent but less sophisticated implementation next week over a comprehensive and complex implementation a year from now, particularly since the two options are not mutually exclusive. There is room for people of varying abilities to make contributions. If you don't want to be one of them, that's fine.

Anthony

Arnon Marcus

unread,
May 26, 2013, 3:47:28 PM5/26/13
to web...@googlegroups.com
I think that we fail to communicate because we have different unspoken assumptions. Let's take the following sentence as an example of what I mean:
"less expert users should work on the easy tasks (that they can do), even if it takes them a bit longer, leaving the more expert users to work on the harder tasks"

What might be an unspoken assumption in this sentence?
"There is a "shortage" of time/effort for developing both easy and complex solutions - one "must" come "at the expense" of the other."
In other words, there is a "GIVEN" time or amount of effort that "experienced" developers have to spare/contribute. Within that given time, an experienced developer may "either" work on a complex task "or" a set of easy ones - he cannot do both.

I think the problem is not with your logic. The problem is with this assumption.
As there is no formal road-map, there "IS NO GIVEN" time in existence that can be pointed at. Therefore, the word "leaving" in "leaving the experienced developer..." has no actual meaning in this context.

A developer may "think" he has more tasks he would like to accomplish than spare time to do them all, but that's a cognitive illusion on his part.
There can only be a shortage of spare time if there is a schedule to frame the time-context. If there is no schedule in existence, then there is no time-frame, and therefore no shortage, and any derivative term that assumes shortage loses its meaning.

As for users=developers - yes, you are right. I may have a different expectation than what is expected/accepted in the web2py community. That would be a flaw of communication on the part of the web2py community/developers/maintainers/managers, etc.

If web2py attaches a different set of meanings to common terminology compared to other open-source communities, then this information should be explicit and up-front - say, in a sticky thread on this forum and/or a document explaining it elsewhere. As long as it is NOT explicit/up-front, it would be erroneous for any well-versed web2py contributor to assume this knowledge on the part of other web2py "whatever"s... If anything, the correct assumption should be the opposite.



Anthony

unread,
Jun 3, 2013, 11:23:02 AM6/3/13
to
Arnon, it is more than a little frustrating having a discussion with you, as it appears you don't really pay much attention to what others are saying. If you go back, you will see I have already addressed your argument. You are correct that we make different assumptions, but the differences are not where you think. My only assumption is that it is better to have a given feature sooner rather than later. On the contrary, you seem to be completely indifferent between having a feature available now and having it available at some indefinite future time. If you can't see why it would be better to have software available sooner rather than later (or at least comprehend why most other people would have that preference), then I'm afraid we are at an impasse.

As for users=developers - yes, you are right. I may have a different expectation than what is expected/accepted in the web2py community. That would be a flaw of communication on the part of the web2py community/developers/maintainers/managers, etc.

No, I believe the flaw lies elsewhere in this case. I will leave it as an exercise for you to figure it out. ;-)

Anthony

Richard Vézina

unread,
Jun 3, 2013, 10:46:24 AM6/3/13
to web2py-users
Sorry to revive this old thread that got pretty long... But this one is for Vinicius...

I have had some time to explore and test the web2py.test you published...

The only thing I'd like to know for now is how to use an in-memory database if I use postgres in my app... Should I mock the database with SQLite in memory, as Anthony suggests? Does web2py.test manage the database replication, or am I responsible for creating a subset of the main database for testing purposes?

Thanks.

Richard



Vinicius Assef

unread,
Jun 3, 2013, 12:19:37 PM6/3/13
to web2py
Hi Richard.

I don't know how to create an in-memory db with postgres, but if you
don't use any direct SQL commands, you shouldn't have problems using
SQLite for testing purposes.
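For example, a minimal py.test fixture along those lines might look
like the sketch below (the 'thing' table and the sample test are
hypothetical placeholders, not part of web2py.test itself):

    import pytest
    from gluon.dal import DAL, Field

    @pytest.fixture
    def db(request):
        # A fresh in-memory SQLite database for each test.
        db = DAL('sqlite:memory')
        # Define whatever tables the code under test needs
        # (the 'thing' table here is just an illustration).
        db.define_table('thing', Field('name'))
        request.addfinalizer(db.close)
        return db

    def test_insert_thing(db):
        db.thing.insert(name='example')
        assert db(db.thing.name == 'example').count() == 1

This assumes the web2py directory is on sys.path so gluon is importable.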

No, web2py.test doesn't manage any db replication. It just supplies a
web2py environment usable in your tests. You should manage your test
data yourself, as in any other test infrastructure.

In my tests I use populate() from gluon/contrib/populate.py to create
test data. It's really useful.
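For instance, something like this (the 'thing' table is again just a
placeholder):

    from gluon.contrib.populate import populate

    # Fill the (placeholder) thing table with 50 rows of generated fake data.
    populate(db.thing, 50)
    assert db(db.thing).count() == 50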

Richard Vézina

unread,
Jun 3, 2013, 12:24:22 PM6/3/13
to web2py-users
Thanks for the answer...

I will try web2py.test in the coming days and report my experience here...

Thanks for sharing this...

Richard