Database-dependent tests and third-party backends


Aymeric Augustin

May 7, 2014, 12:40:08 PM
to django-d...@googlegroups.com
Hello,

I'm trying to make the lives of maintainers of third-party database backends a bit easier.

I'm specifically interested in MSSQL backends. Unlike Oracle, MSSQL isn't supported by Django out of the box, but it's a very common database. The most robust implementation, django-mssql, is very close to passing Django's entire test suite in addition to its own test suite.

Currently Django uses two solutions for database-dependent tests, both of which would require small adjustments to support third-party backends adequately.

1) Checking database features

This is the correct solution for database-dependent tests in general.

What do you think of adding feature flags that have the same value for all core backends but may be set to a different value by third-party backends?

With a comment explaining which backends use the flag and what it does, I find it acceptable. Then we would skip some tests based on the flag.
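
Roughly, something along these lines (the flag name below is made up purely for illustration; skipUnlessDBFeature already exists in django.test):

    # In the shared feature defaults, with a comment naming the backends
    # that override it. The flag name is hypothetical.
    class BaseDatabaseFeatures(object):
        # True for all core backends; django-mssql would set this to False
        # in its own DatabaseFeatures subclass.
        supports_nullable_unique_constraints = True

    # In the test suite, key the skip off the flag rather than the vendor.
    from django.test import TestCase, skipUnlessDBFeature

    class NullableUniqueTests(TestCase):
        @skipUnlessDBFeature("supports_nullable_unique_constraints")
        def test_two_nulls_in_unique_column(self):
            pass  # exercise behaviour that differs on backends without the feature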

2) Checking connection.vendor

This is often inferior to feature flags.

Positive checking ("run this test only for SQLite") is reasonable. At worst, the test is skipped on third-party databases where it could pass.

Negative checking ("run this test on all databases except Oracle") is more problematic. If a third-party backend also fails the test, there's no easy way to ignore it.

Conditional code ("do X for vendors A, B and C, do Y for other vendors") is also problematic. All third-party backends will do Y, which may or may not be the right behaviour.
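
In test code, the three patterns look roughly like this (test names invented for illustration):

    import unittest

    from django.db import connection
    from django.test import TestCase

    class VendorCheckPatterns(TestCase):
        # Positive check: at worst, the test is skipped on a third-party
        # backend where it might have passed.
        @unittest.skipUnless(connection.vendor == "sqlite", "SQLite-specific test")
        def test_sqlite_only_behaviour(self):
            pass

        # Negative check: a third-party backend that also fails this test
        # has no clean way to opt out.
        @unittest.skipIf(connection.vendor == "oracle", "Not supported on Oracle")
        def test_everything_except_oracle(self):
            pass

        # Conditional code: every third-party backend falls into the else
        # branch, whether or not that is the right expectation for it.
        def test_conditional_expectations(self):
            if connection.vendor in ("postgresql", "mysql", "sqlite"):
                pass  # expect behaviour X
            else:
                pass  # expect behaviour Y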

Would you object to adding "microsoft" to explicit checks for connection.vendor where appropriate?


Of course, if that becomes too complicated to manage, that means we should switch to a feature flag ;-)

Thanks for your feedback,

--
Aymeric.

Shai Berger

May 7, 2014, 1:18:25 PM
to django-d...@googlegroups.com
Hi,

On Wednesday 07 May 2014 19:40:08 Aymeric Augustin wrote:
>
> I'm trying to make the lives of maintainers of third-party database
> backends a bit easier.
>

+1.

>
> *1) Checking database features*
>
> This is the correct solution for database-dependent tests in general.
>

In most cases, it is. But in some cases, you need to differentiate on severely
idiosyncratic behaviors, which defy attempts to define them as anything but
"behaves like MySQL".

> *What do you think of adding feature flags that have the same value for all
> core backends but may be set to a different value by third-party backends?*
>
> With a comment explaining which backends use the flag and what it does, I
> find it acceptable. Then we would skip some tests based on the flag.
>

Sounds good to me,

> *2) Checking connection.vendor*
>
> This is often inferior to feature flags.
>
> Positive checking ("run this test only for SQLite") is reasonable. At
> worst, the test is skipped on third-party databases where it could pass.
>
> Negative checking ("run this test on all databases except Oracle") is more
> problematic. If a third-party backend also fails the test, there's no easy
> way to ignore it.
>
> Conditional code ("do X for vendors A, B and C, do Y for other vendors") is
> also problematic. All third-party backends will do Y, which may or may not
> be the right behaviour.
>
> *Would you object to adding "microsoft" to explicit checks for
> connection.vendor where appropriate?*
>

That seems ugly to me, unless we actually add a core backend for SQL Server.
It feels like adding a dependency on 3rd-party projects.

I suggest, instead, that we solve the problem for real (so even Firebird can
enjoy it): Remove all vendor checks except for positive checking. Replace
conditional code with either conditions on feature flags, or refactoring into
separate test functions; and replace negative checks by an API allowing the
backends to filter the tests. So, instead of a test saying "I'm not running on
Oracle", let Oracle say "I'm not running these tests".
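
As a very rough sketch of the idea -- nothing like this exists yet, and every name
below is invented:

    import unittest

    class DatabaseFeatures(object):  # stand-in for a backend's features class
        # Hypothetical attribute: tests this backend declares it cannot pass,
        # mapped to the reason. The test label is a placeholder.
        excluded_tests = {
            "some_app.tests.SomeVendorSpecificTests": "unsupported on this database",
        }

    def filter_suite(suite, features):
        # Runner-side sketch: drop tests whose class the backend excluded.
        # Assumes a flattened suite of TestCase instances; real suites nest.
        excluded = getattr(features, "excluded_tests", {})
        kept = unittest.TestSuite()
        for test in suite:
            label = "%s.%s" % (type(test).__module__, type(test).__name__)
            if label not in excluded:
                kept.addTest(test)
        return kept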

This is much more work, but gives a much nicer result. Also, it should be part
of the larger work -- a formal database backend API.

My 2 cents,

Shai.

Aymeric Augustin

May 7, 2014, 1:41:53 PM
to django-d...@googlegroups.com
2014-05-07 19:18 GMT+02:00 Shai Berger <sh...@platonix.com>:
 
> > *1) Checking database features*
> >
> > This is the correct solution for database-dependent tests in general.
>
> In most cases, it is. But in some cases, you need to differentiate on severely
> idiosyncratic behaviors, which defy attempts to define them as anything but
> "behaves like MySQL".

Yes, that's the (only) appropriate use case for connection.vendor checks.

> > *Would you object to adding "microsoft" to explicit checks for
> > connection.vendor where appropriate?*
> >
>
> That seems ugly to me, unless we actually add a core backend for SQL Server.
> It feels like adding a dependency on 3rd-party projects.

It isn't exactly a dependency, but I see what you mean.
 
> I suggest, instead, that we solve the problem for real (so even Firebird can
> enjoy it): Remove all vendor checks except for positive checking. Replace
> conditional code with either conditions on feature flags, or refactoring into
> separate test functions; and replace negative checks by an API allowing the
> backends to filter the tests. So, instead of a test saying "I'm not running on
> Oracle", let Oracle say "I'm not running these tests".

Yes, that's the best solution. I'll audit connection.vendor checks and see how
many flags we need to introduce.

-- 
Aymeric.

PS: while we're there, we could also audit feature flags and deprecate
obsolete ones, like those for versions of SQLite from 10 years ago. 

Michael Manfre

May 7, 2014, 2:41:45 PM
to django-d...@googlegroups.com
On Wed, May 7, 2014 at 1:18 PM, Shai Berger <sh...@platonix.com> wrote:
> Hi,
>
> On Wednesday 07 May 2014 19:40:08 Aymeric Augustin wrote:
> > *1) Checking database features*
> >
> > This is the correct solution for database-dependent tests in general.
> >
>
> In most cases, it is. But in some cases, you need to differentiate on severely
> idiosyncratic behaviors, which defy attempts to define them as anything but
> "behaves like MySQL".

There are times when it makes sense to have tests that are run only for a specific backend without being controlled by a database feature. These tests shouldn't intermingle with the rest of the test suite. More on this below.
 
> > *What do you think of adding feature flags that have the same value for all
> > core backends but may be set to a different value by third-party backends?*
> >
> > With a comment explaining which backends use the flag and what it does, I
> > find it acceptable. Then we would skip some tests based on the flag.
 
This would make me very happy. It provides backends with a much cleaner way of supporting/testing different database versions and/or drivers.
 
> > *2) Checking connection.vendor*
> >
> > *Would you object to adding "microsoft" to explicit checks for
> > connection.vendor where appropriate?*
>
> That seems ugly to me, unless we actually add a core backend for SQL Server.
> It feels like adding a dependency on 3rd-party projects.

As a blanket statement, I object to explicit vendor checks. Adding "microsoft" would help me, as shown by Aymeric's linked example to my Django 1.6 branch commit, but it is not the correct approach. Django needs more of https://github.com/django/django/commit/c89d80e2cc9bf1f401aa3af4047bdc6f3dc5bfa4
 
> I suggest, instead, that we solve the problem for real (so even Firebird can
> enjoy it): Remove all vendor checks except for positive checking.

A positive vendor check should be shunned like global variables and used only when there is no other choice.
 
> Replace conditional code with either conditions on feature flags, or refactoring into
> separate test functions;

A conditional feature check is better for test speed and code maintenance.
 
> and replace negative checks by an API allowing the
> backends to filter the tests. So, instead of a test saying "I'm not running on
> Oracle", let Oracle say "I'm not running these tests".

It would be better if a database backend could provide extra tests that it exposes to Django and that can be run with the rest of the test suite. If that backend is in use, the test runner would find its tests and run them when specified, or as part of the "run everything" default. This would make the test suite cleaner in general and also solve the awkwardness I face when trying to run my positive vendor check tests.
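
As a rough sketch of what I mean (every name below is invented; no such hook exists
today): the backend package could advertise its extra test modules, and the test
runner could append them to the labels it collects.

    # Hypothetical attribute in the backend package, e.g. sqlserver_ado/__init__.py:
    extra_test_modules = ["sqlserver_ado.tests"]

    # Sketch of the runner side: after collecting Django's own test labels,
    # ask the active backend whether it ships extra test modules.
    from importlib import import_module

    from django.db import connection

    def backend_test_labels():
        backend = import_module(connection.settings_dict["ENGINE"])
        return list(getattr(backend, "extra_test_modules", []))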

Regards,
Michael Manfre


Aymeric Augustin

May 7, 2014, 4:20:45 PM
to django-d...@googlegroups.com
I’ve created a pull request: https://github.com/django/django/pull/2640. It removes almost all non-positive vendor checks.

I didn’t deal with all introspection tests because there’s too much variability between backends to express with feature flags. I’m wondering if the solution is to create a copy of the tests for each backend — that would be a positive vendor check — and to adjust them every time. This would cause some duplication, but that isn’t a big deal in test code.

Backend-specific tests are an interesting idea but I’d like to keep them out of the scope of this discussion, if possible ;-)

--
Aymeric.

Aymeric Augustin

May 7, 2014, 5:00:20 PM
to django-d...@googlegroups.com
Finally I bit the bullet and added half a dozen flags just for introspection capabilities. The pull request is ready for review.

I don't think the remaining vendor checks will hurt third-party backends. We can continue this effort if they do.

--
Aymeric.

Russell Keith-Magee

May 7, 2014, 9:03:11 PM
to Django Developers
On Thu, May 8, 2014 at 12:40 AM, Aymeric Augustin <aymeric....@polytechnique.org> wrote:
> Hello,
>
> I'm trying to make the lives of maintainers of third-party database backends a bit easier.
 
+1. Very much in support of this as a general principle.
 
> I'm specifically interested in MSSQL backends. Unlike Oracle, MSSQL isn't supported by Django out of the box, but it's a very common database. The most robust implementation, django-mssql, is very close to passing Django's entire test suite in addition to its own test suite.
>
> Currently Django uses two solutions for database-dependent tests, both of which would require small adjustments to support third-party backends adequately.
>
> 1) Checking database features
>
> This is the correct solution for database-dependent tests in general.
>
> What do you think of adding feature flags that have the same value for all core backends but may be set to a different value by third-party backends?
>
> With a comment explaining which backends use the flag and what it does, I find it acceptable. Then we would skip some tests based on the flag.

From a purely technical perspective, this is the right approach AFAICT.
 
The catch is at a project management level. By doing this, we're introducing code branches, but our own testing won't be able to execute the "other" branch, so our release testing coverage will be deficient. From a testing perspective, this makes me a little nervous, as we're essentially relying on third party backend providers to (a) tell us which flags are needed, (b) when/where they need to be applied, and (c) to respond in a timely fashion during the release process to make sure the flags are all up to date.

> 2) Checking connection.vendor
>
> This is often inferior to feature flags.
>
> Positive checking ("run this test only for SQLite") is reasonable. At worst, the test is skipped on third-party databases where it could pass.
>
> Negative checking ("run this test on all databases except Oracle") is more problematic. If a third-party backend also fails the test, there's no easy way to ignore it.
>
> Conditional code ("do X for vendors A, B and C, do Y for other vendors") is also problematic. All third-party backends will do Y, which may or may not be the right behaviour.
>
> Would you object to adding "microsoft" to explicit checks for connection.vendor where appropriate?

I'm less convinced this is a good idea. Vendor flags were introduced as part of the introduction of unittest2 to Django's core. In retrospect, the role being played by 'vendor' should probably be implemented as a specific feature. There might be a light documentation benefit from having backend.vendor, but I'm not convinced we should be leaning on it heavily for testing purposes, and I'm definitely not convinced adding references to vendors that aren't part of Django's official codebase is a good idea.

The good news here is that your PR #2640 seems to bear this out.

While we're on the subject of providing better support to third party backends - I've floated this idea in the past, but now seems an opportune time to discuss it again: I'd like to be able to expand our testing infrastructure to include automated testing of projects that are significant to the community. Database backends are particularly relevant here, but things like DDT, Django-registration, and django-rest-framework probably also fall under this umbrella. The idea here would be:

 1) to give us (as the core team) better visibility when we make a change that is going to have a big impact on the community
 2) to ensure that the community knows what libraries in the ecosystem have been updated when we release a new Django version
 3) to provide some guidance to the community that these are the packages you should be paying attention to - the "first amongst equals" of the broader third party app ecosystem.

I know this isn't a small project. However, we have a good opportunity to get the ball rolling next week :-)
 
Yours,
Russ Magee %-)

Rahul

May 11, 2014, 3:07:56 PM
to django-d...@googlegroups.com
Hi,
I am maintaining the DB2 backend and facing the same problem; we need to convert as many vendor-dependent checks as we can into feature-dependent tests.

Some test cases were written with particular backend features in mind, but the corresponding feature check was never added there.

The following test cases require a can_defer_constraint_checks check, since they want to defer constraint checking, but a database like DB2 doesn't allow deferring it even within a transaction:
i) the test cases under the admin_views.tests.AdminViewDeletedObjectsTest test class
ii) the test cases under the serializers_regress.tests.SerializerTests test class

The following test cases require a supports_forward_references flag check (a sketch of such a guard follows the list):
i) serializers.SerializersTransactionTestBase.test_forward_refs
ii) fixtures_regress.TestFixtures.test_loaddata_works_when_fixture_has_forward_refs
iii) fixtures_regress.TestFixtures.test_loaddata_forward_refs_split_fixtures
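
For example, the forward-reference tests could be guarded with the existing flag along these lines (just a sketch; the admin_views and serializers_regress tests would get a similar can_defer_constraint_checks guard):

    from django.test import TransactionTestCase, skipUnlessDBFeature

    class TestFixtures(TransactionTestCase):

        @skipUnlessDBFeature("supports_forward_references")
        def test_loaddata_works_when_fixture_has_forward_refs(self):
            pass  # loads a fixture whose objects reference rows that appear later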

Thanks,
Rahul

Marc Tamlyn

May 11, 2014, 4:19:27 PM
to django-d...@googlegroups.com
Hi Rahul,

Can you open a ticket for these specific issues? Thanks a lot for the feedback!

Marc



Rahul

May 19, 2014, 6:52:03 AM
to django-d...@googlegroups.com

I have created ticket #22653 ( https://code.djangoproject.com/ticket/22653 ) for the issues I mentioned.

Maximiliano Robaina

May 19, 2014, 10:31:00 AM
to django-d...@googlegroups.com
I agree, I think this is a much cleaner approach.

Also, there are some specific cases (at least in Firebird) that cause tests to fail. For instance, in test/regressiontests/model_fields/models.py the BigD model uses max_digits = 38 and decimal_places = 30, which are not supported by Firebird.
So, how can I skip this test? Would I have to ask for a new database feature flag to be added? I don't think that would be the correct approach.
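
For reference, the feature-flag route I'm asking about would presumably look something like this (the flag and test names here are invented):

    import unittest

    from django.db import connection
    from django.test import TestCase

    # Hypothetical flag: the largest decimal_places the backend can store.
    MAX_PLACES = getattr(connection.features, "max_decimal_places", 30)

    class BigDecimalTests(TestCase):
        @unittest.skipUnless(MAX_PLACES >= 30, "backend cannot store 30 decimal places")
        def test_big_decimal_roundtrip(self):
            pass  # would exercise a field with max_digits=38, decimal_places=30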

Maximiliano Robaina

May 19, 2014, 10:32:32 AM
to django-d...@googlegroups.com
+1
 

Rahul

May 19, 2014, 4:27:52 PM
to django-d...@googlegroups.com
The DECIMAL datatype of DB2 also doesn't support max_digits = 38 and decimal_places = 30. In DB2 the precision range is 1 to 31, and in Firebird it is 1 to 18 (ref: http://www.promotic.eu/en/pmdoc/Subsystems/Db/FireBird/DataTypes.htm).

It would be good if max_digits and decimal_places could be changed to values that are also valid for Django's third-party backends.