Ian,
Thank you for the prompt reply!
I am observing the same behavior in the Django shell. Here the actual
query runtime is about the same between Oracle and PostgreSQL back-
ends, but the total turnaround time is about 18 times longer with
Oracle. I believe the following code demonstrates this case:
from django.db import connection
import minilims.log.models as log
import time

time_list = []
for n in range(0, 20):
    t1 = time.time()
    entries = log.Param.objects.filter(log=6).order_by('stuff', 'id')
    entry = [x for x in entries]  # force evaluation of the QuerySet
    t2 = time.time()
    time_list.append(t2 - t1)

print len(connection.queries), 'queries ran.'
average_time = sum(time_list) / len(time_list)
# display minimum, average, and maximum turnaround time
print min(time_list), average_time, max(time_list)
# display average query time
print sum([float(x['time']) for x in connection.queries]) / len(connection.queries)
Running the above code in the shell with a PostgreSQL back-end reports:
>>> # display minimum, average, and maximum turnaround time
>>> print min(time_list), average_time, max(time_list)
0.203052997589 0.211852610111 0.234575033188
>>>
>>> # display average query time
>>> print sum([float(x['time']) for x in connection.queries]) / len(connection.queries)
0.0557
However, running the same code with an Oracle back-end, after restarting
the shell, results in:
>>> # display minimum, average, and maximum turnaround time
>>> print min(time_list), average_time, max(time_list)
3.59030008316 3.64263659716 4.33223199844
>>>
>>> # display average query time
>>> print sum([float(x['time']) for x in connection.queries]) / len(connection.queries)
0.05825
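Incidentally, the timing loop above can be factored into a small
backend-agnostic helper (plain Python, no Django; the helper name is my
own invention) so the same min/average/max measurement can be rerun
against either database by passing in a callable that forces the query:

```python
import time

def time_calls(fn, runs=20):
    """Call fn `runs` times and return (min, avg, max) wall-clock
    durations in seconds."""
    durations = []
    for _ in range(runs):
        t1 = time.time()
        fn()  # e.g. lambda: list(log.Param.objects.filter(log=6))
        t2 = time.time()
        durations.append(t2 - t1)
    return min(durations), sum(durations) / len(durations), max(durations)
```

For example, `time_calls(lambda: list(entries))` against each back-end
reproduces the three numbers reported above, which makes it easier to
compare the total turnaround with the per-query times Django logs in
connection.queries.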
Any ideas?
Thanks!
DW