Is there a way to increase performance for filtering the data?
Retrieving fewer rows is not an option, since our clients want the
detailed information available.
Machine details: Windows Vista, quad-core CPU at 2.4 GHz, 3 GB RAM, 32-bit.
Thanks
Eric
How can the user possibly view 1.4M rows at the same time?
I'd suggest you work out exactly what their use cases are and find a
way to pretend that you have all 1.4M rows while only retrieving the
rows the user is actually viewing at any point in time. Or retrieve the
records only when they are requested, as you write them out to a file.
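As a minimal sketch of that kind of paging in PowerScript, assuming the
DataWindow object's SELECT was written with two retrieval arguments (the
dw_detail control, the il_page instance variable, and the table and
argument names are my own illustration, not anything from your app):

    // Hypothetical DataWindow SELECT, limited by retrieval arguments:
    //   SELECT ... FROM order_detail
    //   WHERE row_num BETWEEN :al_first AND :al_last
    // Event that fetches the next page of 500 rows on demand:
    long ll_first, ll_last
    il_page ++                           // instance variable, starts at 0
    ll_first = (il_page - 1) * 500 + 1
    ll_last  = il_page * 500
    dw_detail.Retrieve(ll_first, ll_last)

The user still believes all the data is "available"; you just never hold
more than one page of it in the Primary buffer at once.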
There's a big difference between having the information "available" in
memory and loading it on demand only when you really, really have to.
On the plus side, it should be faster to leave all the searching,
sorting, filtering, and aggregating work to the database.
Thanks for the answer. I think what you are trying to tell me is that
PowerBuilder is not the tool to go for if you want to "play" with
this amount of data.
I had really hoped someone had found a way to speed up filtering when
saving the data is not necessary. That would give us great options.
Thanks
Eric
Part of the problem with filtering is the need to evaluate an
expression on every row.
If you can identify, in some other way, a group of rows you want to exclude,
you could call RowsMove() to filter a block of records in one hit.
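For example, a rough sketch, assuming the rows you want to hide can be
made contiguous by sorting first (dw_detail and the region column are
placeholders I've invented for illustration):

    long ll_first_keep
    dw_detail.SetRedraw(FALSE)
    // Sort so the rows to exclude sit in one contiguous block at the top
    dw_detail.SetSort("region A")       // hypothetical column
    dw_detail.Sort()
    // Locate the first row to keep (0 means no match, so nothing moves)
    ll_first_keep = dw_detail.Find("region = 'EAST'", 1, dw_detail.RowCount())
    if ll_first_keep > 1 then
       // One call moves the whole block to the Filter buffer;
       // no expression is evaluated row by row
       dw_detail.RowsMove(1, ll_first_keep - 1, Primary!, dw_detail, 1, Filter!)
    end if
    dw_detail.SetRedraw(TRUE)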
If a complete retrieve takes 2 minutes, it would be faster to simply
re-retrieve the DataWindow after modifying the query. I'll agree that it is
unrealistic for any standard query tool to function efficiently with
hundreds of thousands of rows. Perhaps your DBMS vendor has OLAP tools that
are more suitable for this type of thing.
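As a hedged sketch of that approach (SetSQLSelect is standard
PowerBuilder, but the table, columns, and the dw_detail name are invented
for illustration; the new column list must still match the DataWindow
object's columns):

    // Push the restriction into the WHERE clause and let the DBMS do
    // the filtering, instead of SetFilter()/Filter() over 1.4M rows
    integer li_rc
    li_rc = dw_detail.SetSQLSelect( &
       "SELECT order_id, region, amount FROM order_detail " + &
       "WHERE region = 'EAST'")
    if li_rc = 1 then
       dw_detail.Retrieve()
    end if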