DBFNTX - how to improve speed in SKIP by index file ?


Sergy

unread,
May 15, 2017, 10:04:48 AM5/15/17
to Harbour Users
Hello friends


There is a very costly sequence of operations while my app skips by the active index:

Direct file IO access (also with HBNETIO used as a FILE IO redirector) needs
many more operations, i.e. a SKIP with an active index needs:
   LOCK INDEX FILE
   READ INDEX HEADER WITH UPDATE COUNTER TO CHECK IF IT WAS CHANGED
   IF INDEX WAS CHANGED SEEK CURRENT RECORD
 * TAKE NEXT/PREV RECORD NUMBER FROM INDEX PAGE
   READ RECORD BODY
   IF RECORD IS NOT VISIBLE (DELETED OR FILTERED) GOTO (*)
   UNLOCK INDEX FILE
Each (*) operation may cause an additional IO request.
A SEEK operation has, in practice, a cost comparable to a SKIP.
 
Is there a way to disable the "visibility" and other checks for the "current-and-next" records to improve performance?

In short: I have a sales database with ~4,400,000 records from the past 5 years. Each record contains {sale_date, product_code, number, price, etc.}

If I need to select "last month" sales, a SEEK to BOM(prev_month) followed by SKIPs until EOM(prev_month) may take ~1 minute on my 1 Gb LAN.

If I need to select 2..3..6 months, I prefer COPY FILE (server_dir+"\sales.dbf") TO (local_dir+"\sales.dbf") instead.
The copy takes ~5 seconds, and reading over 4,000,000 records with no index takes less than 1 minute.

Why is "index access" so slow, and how can I make it faster?

Maybe there is a "magic" option or some function call. The data is "stable" (almost "archived") and cannot be changed while I make the selection...
Maybe there is a way to get a list of "record numbers" for my selection?
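For reference, the selection described above boils down to a loop like the following minimal sketch (the index order, alias, and the dBom/dEom begin/end-of-month date variables are illustrative, not real library names):

```harbour
// "last month" selection over the date index; each SKIP under an
// active index may cost several network IO requests, as listed above
USE sales INDEX sales, sales2 NEW SHARED
ordSetFocus( 2 )               // the date index
dbSeek( dBom, .T. )            // soft seek to the first record of the month
DO WHILE ! Eof() .AND. sales->sale_date <= dEom
   // accumulate report data here
   dbSkip()
ENDDO
```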

Thanks.

Daniele Campagna

unread,
May 15, 2017, 11:33:11 AM5/15/17
to harbou...@googlegroups.com

Well, what about using dtos(sale_date)? Dates will have the format "yyyymmdd" (char), so a simple seek "201703" will find the first record of March 2017 sales; then read and skip until the dates no longer match.

Skipping with indexes open is far slower than without them. If you are sure that sales are added so that the older ones come first, you can:

seek the first, seek the last (Someone ages ago wrote a seekbottom() function),

close indexes, go to first, do while recno()<=last, skip...
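Dan's first suggestion sketched out (the index file name is made up, and the month-prefix comparison relies on the default SET EXACT OFF behaviour):

```harbour
// build (once) an index on the character form of the date
INDEX ON DTOS( sale_date ) TO salesdt

// seek by month prefix, then read until the prefix no longer matches
dbSeek( "201703" )                          // first March 2017 sale
DO WHILE ! Eof() .AND. DTOS( sale_date ) = "201703"
   // process the record
   dbSkip()
ENDDO
```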

Dan

--
--
You received this message because you are subscribed to the Google
Groups "Harbour Users" group.
Unsubscribe: harbour-user...@googlegroups.com
Web: http://groups.google.com/group/harbour-users

---
You received this message because you are subscribed to the Google Groups "Harbour Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to harbour-user...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Sergy

unread,
May 15, 2017, 3:19:25 PM5/15/17
to Harbour Users, cyber...@tiscalinet.it
Hi Dan

Good idea !

I'm not sure now that records go from older to newer, but I can SORT my table every night - before rebuild-indexes process.
Need to do some test and measure.

Thank you.

On Monday, May 15, 2017 at 18:33:11 UTC+3, Daniele Campagna wrote:

Francesco Perillo

unread,
May 15, 2017, 3:31:50 PM5/15/17
to harbou...@googlegroups.com
If you sort, no need to create an index....



Sergy

unread,
May 15, 2017, 3:40:48 PM5/15/17
to Harbour Users
While 95% of my users are working with the shared sales table, 5% would like to get some reports. As usual, those reports may (or may not) include the current date, which gets many changes every second...


On Monday, May 15, 2017 at 22:31:50 UTC+3, fperillo wrote:

Ash

unread,
May 15, 2017, 4:47:00 PM5/15/17
to Harbour Users
Hello Sergy,

Create an index on dtos(sale_date).

SET SCOPE TO '20170101', '20170131' //for one month in this example
GOTO TOP
DO WHILE .NOT. Eof()
  // Process record
  SKIP
ENDDO

I use this method in my applications for queries and reports. There are no speed issues with large tables.

Regards.
Ash

Francesco Perillo

unread,
May 15, 2017, 5:21:48 PM5/15/17
to harbou...@googlegroups.com
Hi Sergy,
I don't understand what you really need to do; I don't know your database, your indexes and - most important - the queries you need to run.

When using "normal" dbf files on a shared directory you are using the SMB protocol, and I'm quite sure you have disabled oplocks. Without oplocks the client needs to ask for one record at a time, one after the other, also taking care of possible changes in the index. This means that if you do a full table scan, you need to transfer 4400k records over the LAN.

If your query filter returns only 100 records, you still have to transfer 4400k records.

If you use an index to filter, you can transfer far fewer records; it depends on how strict the filter can be.

Using NETIO you don't change any of this; you only use a slightly quicker, more efficient protocol over the LAN instead of SMB... This is good, but it is just a little quicker.


NETIO has a feature that is a real plus: RPC calls. From your client you can execute some code directly on the NETIO server. This means that the database is read as local - shared but local - and execution time can be really quick.

LETODB is a step further: the client sends, for example, the filter conditions to the server, which sends back only the needed records. For the above example, LETODB can return over the wire only 100 records, discarding all the others on the server.
If I remember correctly it can also run commands like SUM on the server, returning just the totals... fast....
I don't know if LETODB also supports RPC-like stuff.


What I'd do: use NETIO RPC, if I could implement NETIO. This would also remove the need to access dbf files via a share...
Using standard SMB, if the server is Windows or Linux, I'd write a "reporting app" to which I'd move all the reports. Clients would ask this application, running on the server and accessing the dbf files on local disks, for a report. You can provide HTTP API calls, or implement a dbf-based queue. You can reply to the HTTP API with JSON results, or create a report as a PDF and reply with the filename, or send it via email...
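The RPC idea can be sketched roughly like this with the hbnetio contrib (MakeSalesReport() is a hypothetical function linked into the NETIO server; "2941" is the hbnetio default port):

```harbour
// client side: run the report code on the NETIO server itself,
// so the millions of records never cross the LAN
IF netio_Connect( "server", "2941" )
   // MakeSalesReport() is hypothetical server-side code that reads
   // the DBFs locally and returns only the finished result
   xResult := netio_FuncExec( "MakeSalesReport", dStart, dStop )
   netio_Disconnect( "server", "2941" )
ENDIF
```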

In a specific case I nightly create a precalculated table with the sums of the monthly results... One record contains the sum of dozens of single records that I can recover later by drilling down in the report, if needed.....

I'd suggest two other options:
1) mysql/postgresql/oracle...: nightly sync the databases and let the sql planner calculate the best way to compute your results... just build the sql query from your user's request and let them work...

2) I lately used the ELK stack. Elasticsearch is a Lucene-based, full-text database, usually used to store millions of log lines.... daily.... Ingestion is done by Logstash (or directly via json messages), and Kibana is the web-based GUI, complete with drillable graphics. Imagine a language to specify a filter, and a tool to specify what to do with the filtered data: a graph, a table, just the result... You can also build realtime dashboards... You have a date; you have quantity instead of bytes, price instead of number of files... The database is nosql style; you don't have a rigid database structure with fixed fields....

Francesco



Przemyslaw Czerpak

unread,
May 15, 2017, 5:51:12 PM5/15/17
to harbou...@googlegroups.com
Hi,

You need an RDD with bitmap support.
Using DBRMAP with an index on the DATE field, all that is necessary
is to set a filter and switch to natural table order:

request RMDBFCDX
[...]
rddSetDefault( "RMDBFCDX" )
[...]

// filters on indexed fields are automatically optimized
// in RMDBF* RDDs
SET FILTER TO DATE >= STOD( "20170401" ) .AND. ;
              DATE <= STOD( "20170430" )

// natural table order, skip does not touch index
ordSetFocus( 0 )
dbGoTop()
WHILE ! eof()
   [...]
   dbSkip()
ENDDO


In the Harbour repository there is the contrib RDDBM library, which
also supports bitmap filters, but they have to be set manually
(there is no automatic filter optimization), the method of
manual setting is rather inefficient, and the later skipping in
natural order is not as fast as it could be. Anyhow, you can
try this:

request BMDBFCDX
[...]
rddSetDefault( "BMDBFCDX" )
[...]

// choose index on DATE field
ordSetFocus( 1 )
// create array for filtered records
aBM := {}
dbSeek( dStart )
dbOrderInfo( DBOI_SKIPEVAL,,, {| key, rec |
                IF key > dStop
                   RETURN .t.
                ENDIF
                aadd( aBM, rec )
                RETURN .f.
             } )
bm_dbSetFilterArray( aBM )
ordSetFocus( 0 )

dbGoTop()
WHILE ! eof()
   [...]
   dbSkip()
ENDDO


You can also lock the index manually and make a COPY TO
for the given range:

// choose index on DATE field
ordSetFocus( 1 )
// lock the index
dbOrderInfo( DBOI_READLOCK, .T. )
// now the index is locked, no one else can update it
dbSeek( dStart )
// copy records
COPY TO (cDest) WHILE DATE <= dStop
// unlock the index
dbOrderInfo( DBOI_READLOCK, .F. )

The COPY TO will be much faster because the index is locked the
whole time, so the internal SKIP inside the COPY TO operation, instead of:
   LOCK INDEX FILE
   READ INDEX HEADER WITH UPDATE COUNTER TO CHECK IF IT WAS CHANGED
   IF INDEX WAS CHANGED SEEK CURRENT RECORD
 * TAKE NEXT/PREV RECORD NUMBER FROM INDEX PAGE
   READ RECORD BODY
   IF RECORD IS NOT VISIBLE (DELETED OR FILTERED) GOTO (*)
   UNLOCK INDEX FILE
becomes:
 * TAKE NEXT/PREV RECORD NUMBER FROM INDEX PAGE
   READ RECORD BODY
   IF RECORD IS NOT VISIBLE (DELETED OR FILTERED) GOTO (*)
It's much faster, but it blocks other users who try to update the
index.

best regards,
Przemek

Francesco Perillo

unread,
May 16, 2017, 6:04:21 AM5/16/17
to harbou...@googlegroups.com
Hi Przemek,
thank you for your reply.


On Mon, May 15, 2017 at 11:51 PM, Przemyslaw Czerpak <dru...@poczta.onet.pl> wrote:

   request RMDBFCDX

Is this RDD public?

How does it work internally? Is it the RDD that creates an array of records to return to the client? Something like BMDBFCDX but done internally...?



   request BMDBFCDX

I do something similar for a browse in hbQt: I run a new thread that scans the file and populates an array with recno()... it is the model in Qt... I may try BMDBFCDX...

Sergy

unread,
May 16, 2017, 4:58:36 PM5/16/17
to Harbour Users
Thank you, Francesco, for the good suggestions.

Can I use NETIO/LetoDBf together (simultaneously) with the "normal" DBFNTX driver?
My *.prg sources now weigh about 3.5 MB - it's too difficult to move everything to a new RDD.

Thank you again.

On Tuesday, May 16, 2017 at 0:21:48 UTC+3, fperillo wrote:

Sergy

unread,
May 16, 2017, 5:03:03 PM5/16/17
to Harbour Users
Hi Przemek

Your suggestion is even more of a wonder to me than Francesco's NETIO/LetoDBf... ))

I will investigate those RDDs in depth, but first question: can I use "bitmap" indexes together with DBFNTX?

Thank you for the great support for me and this community.

On Tuesday, May 16, 2017 at 0:51:12 UTC+3, druzus wrote:

ZeTo Fernandes

unread,
May 16, 2017, 6:50:26 PM5/16/17
to Harbour Users
Hi, Sergy
Did you consider using a copy of the .dbf file as an archive of all the previous months, with the data sorted?
Zeto

Sergy

unread,
May 17, 2017, 3:52:45 PM5/17/17
to Harbour Users
Hi, Zeto

I would like to go further: to use a local copy of the sales table, divided by year, for example: sales2014.dbf, sales2015.dbf, sales2016.dbf, etc...
The data is "stable", and I can improve productivity: if a user wants to create a report for one year (e.g. from 01-05-2016 till 30-04-2017), my app should read only sales2016.dbf and sales2017.dbf.

BUT: this change requires re-creating the data-read logic in all my sales reports...

And I have to choose the appropriate way to do this:

1) local (sorted, divided) data
2) bitmap RDD
3) the lock of active index
4) NETIO/Leto/SQL
5) ELK stack...

WBR
Sergy

On Wednesday, May 17, 2017 at 1:50:26 UTC+3, ZeTo Fernandes wrote:

Przemyslaw Czerpak

unread,
May 17, 2017, 4:09:17 PM5/17/17
to harbou...@googlegroups.com
On Tue, 16 May 2017, Francesco Perillo wrote:

Hi Francesco,

> > request RMDBFCDX
> Is this RDD public?

No it isn't.

> How does it work internally? Is the RDD that creates an array of records to
> return to the client? Something like BMDBFCDX but done internally... ?

Both RDDs use bitmap (one bit per record) filters to set the visible
records, but the BMDBF* RDDs are a very basic implementation which
supports only Harbour arrays in operations on the low-level bitmap
record set. It also does not optimize many operations which could be
quite well optimized with bitmap filters, so the speed improvement is
only partial.

> > request BMDBFCDX
> I do something similar for a browse in hbQt: I run a new thread that scans
> the file and populate an array with recno()... it is the model in Qt... I
> may try BMDBFCDX...

The cost of setting a bitmap filter in the BMDBF* RDDs is much bigger than
it should be; anyhow, once the filter is set it should greatly speed
up browsing records regardless of the user interface used.

best regards,
Przemek

Przemyslaw Czerpak

unread,
May 17, 2017, 4:14:31 PM5/17/17
to harbou...@googlegroups.com
On Tue, 16 May 2017, Sergy wrote:

Hi Sergy,

> Your suggestion is more wonder for me than Francesco's NETIO/LetoDBf... ))

For sure NETIO is the fastest method, i.e. you can create a set of
simple functions which COPY TO temporary DBF files on the server
side for a given range of data, and then open these tables remotely to
create the final reports. Easy and fast.

> I will do some deep investigation in those RDD, but the first question -
> can I use "bitmap" indexes together with DBFNTX ?

Yes. They are descendant RDDs of the DBF* RDDs, so all low-level index data
is left untouched in the bitmap RDDs.
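In practice the existing .dbf and .ntx files could then stay as they are; a minimal sketch, assuming the contrib rddbm library provides a BMDBFNTX descendant of DBFNTX:

```harbour
request BMDBFNTX

// use the bitmap descendant instead of plain DBFNTX;
// existing files and indexes are opened unchanged
rddSetDefault( "BMDBFNTX" )
USE sales INDEX sales, sales2 NEW SHARED
```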

best regards,
Przemek

elch

unread,
May 17, 2017, 5:00:08 PM5/17/17
to Harbour Users
Hi,

 
> Your suggestion is more wonder for me than Francesco's NETIO/LetoDBf... ))
For sure NETIO is the fastest method, i.e. you can create set of

LetoDBf processes (SKIPs) roughly 80K-100K records of an indexed table in a *single second*, and this can be tuned up 3-5 times.

In comparison NetIO feels like a lame duck, tens! of times slower, as both techniques have to deal with the limits of 'packet based' networks ...

Given those unmentioned network limits, a bitmap index with NetIO may give only a 'homeopathic' increase in performance ...


IMO

Rolf

Francesco Perillo

unread,
May 17, 2017, 5:06:22 PM5/17/17
to harbou...@googlegroups.com
Rolf,
Przemek suggested creating functions server side, aka RPC calls, to create a copy of the database and then open the copy. This way he can keep his reports on the client and just change the filter code.




Przemyslaw Czerpak

unread,
May 17, 2017, 5:14:50 PM5/17/17
to 'elch' via Harbour Users
On Wed, 17 May 2017, 'elch' via Harbour Users wrote:

Hi Rolf,

> > > Your suggestion is more wonder for me than Francesco's NETIO/LetoDBf...
> > ))
> > For sure NETIO is the fastest method, i.e. you can create set of
> LetoDBf processes (SKIPs) roughly through 80K-100K records using an indexed
> table in a *single second*, which can be tuned up 3-5 times.
> In comparison NetIO feels like a lame duck, tenths ! of times slower as
> both technics have to deal with limits in 'package based' networks ...

Read my message again. I was talking about processing operations on
the server side, which with NETIO can be done only using RPC, which is
the fastest method. Also much faster than using LETODB.

> These not mentioned network limits may have a 'homeopathic' increase of
> performance by using bitmap index with NetIO ...

You are talking about using NETIO as a file redirector which, as Francesco
said in one of the previous messages, gives only a small improvement in
comparison to standard file-server network redirectors, and is for sure
much slower than LETODB.

best regards,
Przemek

elch

unread,
May 17, 2017, 6:12:49 PM5/17/17
to Harbour Users
Hi Przemek,


> > > Your suggestion is more wonder for me than Francesco's NETIO/LetoDBf...
> > ))
> > For sure NETIO is the fastest method, i.e. you can create set of
> LetoDBf processes (SKIPs) roughly through 80K-100K records using an indexed
> table in a *single second*, which can be tuned up 3-5 times.
> In comparison NetIO feels like a lame duck, tenths ! of times slower as
> both technics have to deal with limits in 'package based' networks ...

Read my message again. I was talking about processing operations on
the server side, which with NETIO can be done only using RPC, which is
the fastest method. Also much faster than using LETODB.

sigh,

even the old original LetoDb can call any possible Harbour function at 'server side'.

[Francesco: but no <command> like "SUM" pre-processed by std.ch :-) ]


It can also call 'UDFs' [ User Defined Functions ] compiled as HRB,
loadable at server start, or even while the server is up.


The LetoDBf fork will not block other connections while such an RPC/UDF is executed;
that is a somehow un-noticed 'new' feature.

It can also start such a UDF 'connection independent';
think of a 'background task' running at the server side, started by a user who afterwards logs out and goes home.


And *maybe* my point is missed:
without any changes, LetoDBf outperforms NetIO multiple times.
And RPC is not the whole solution: what about updating new data to the server?


IMO

Rolf


[ a nice place to dedicate great honours to Alexander and Pavel for their work, without which there would be no LetoDBf ]

ZeTo Fernandes

unread,
May 17, 2017, 6:49:26 PM5/17/17
to Harbour Users
Hi, Sergy
As far as I understand your question, I would not divide it ( sales2014.dbf, sales2015.dbf, sales2016.dbf, etc, ...).
If that data is stable (and a different file from the 'online' operations), you can treat the file as 'read-only', for reports only. You can update that file on an end-of-day basis....
and you do not need to re-create the data-read logic.

Daniele Campagna

unread,
May 18, 2017, 3:29:46 AM5/18/17
to harbou...@googlegroups.com

Sergy, another solution could be using scoped indexes. I have a database where some data are "active" and flagged "A", while others are "historical" and flagged "B". Users work the entire month on the "active" data, then close the period (records are marked "B") and load a new batch of records (marked "A"). Using a filter to display/browse only the "A" records is too slow, so I have in place a simple:

index on <date field> for <flagname>="A" to <indexname>

Now, using the index, users see only the "A" records with no speed degradation.

You could leave the dbf unchanged and only create scoped indexes (index01, index02...) by month number.
Of course this approach is fine if users need a single-month report; if they want e.g. January AND February, you must either create a new temporary index on the fly or extract the data to a temp file (first January, then February....), and this complicates things...
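A concrete instance of the scoped-index idea (the file, field and index names are only illustrative, and the prefix comparison relies on the default SET EXACT OFF):

```harbour
// one conditional index per closed month; only matching records
// ever enter the index, so no runtime filter is needed
INDEX ON DTOS( sale_date ) FOR DTOS( sale_date ) = "201704" TO sales04

// a report for April 2017 then opens just that index
USE sales INDEX sales04 NEW SHARED
GO TOP
DO WHILE ! Eof()
   // only April 2017 records are visible through this index
   dbSkip()
ENDDO
```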

Besides, why use ntx indexes? Switch to cdx.

Dan

Sergy

unread,
May 21, 2017, 6:53:25 AM5/21/17
to Harbour Users
Hi Przemek

For sure NETIO is the fastest method, i.e. you can create a set of
simple functions which COPY TO temporary DBF files on the server
side for a given range of data, and then open these tables remotely to
create the final reports. Easy and fast.

Good idea. I'm thinking along these lines, because I cannot convert all my code to NETIO/LetoDBf in one day...
 
> I will do some deep investigation in those RDD, but the first question -
> can I use "bitmap" indexes together with DBFNTX ?

Yes. They are descendant RDDs of the DBF* RDDs, so all low-level index data
is left untouched in the bitmap RDDs.

Sorry, I did not understand how to do this. Right now it works this way: USE sales INDEX sales,sales2 NEW
0) DBFNTX is the default RDD.
1) sales.ntx - primary index by goods id - for when I need to seek all sales of one item.
2) sales2.ntx - secondary index by date - for when I need to seek all sales by date.

Sorry, maybe it's a very stupid question, but how can I include and "live-update" a 3rd index which uses another RDD?
I have never faced this before.

Thank you for support.

--
Sergy

Sergy

unread,
May 21, 2017, 6:55:44 AM5/21/17
to Harbour Users
Hi Rolf

LetoDBf is based on NETIO, but why is it so much faster?
WBR, Sergy.

Sergy

unread,
May 21, 2017, 7:07:10 AM5/21/17
to Harbour Users, cyber...@tiscalinet.it
Hi Dan

I'm thinking now about restructuring my data and query-for-reporting. I think that one (big) table and many indexes isn't a good idea, because I'd need to implement a) a marker for each record (blocked/free) and b) some logic to "protect" these records in sales operations, because some "supervisors" can access the entire data to make corrections in the orders. If I don't do it, the "monthly" indexes will be corrupted.

About NTX/CDX - as I understood our 'gurus', there are no big differences between them. NTX occupies more space on disk, but doesn't need the RAM and CPU ticks to "unpack" the packed data in the index leaves.

WBR.
--
Sergy

elch

unread,
May 24, 2017, 3:18:41 PM5/24/17
to Harbour Users
Hi Sergy,


Hi Rolf
LetoDBf is based on NETIO, but why is it so much faster?

this is your third misperception: it is *not* based on HbNetIO.

It is based on Russian! technics, experienced to work even in bad conditions,
just extended by a minor bit of German precision :-)


a:)

an RDD ( Replaceable Database Driver ) is a technic invented by Cl*pper some decades ago.
This means you can switch from one to another with 'very few' changes.
So the basic change to test LetoDBf is to add a line with Leto_Connect() to the server
-- and to use letodb.hbc ...
Sure, there are ways to tie yourself more closely to LetoDBf specifics -- not needed for the first steps.


b:)

it is highly optimized to work around the network limits of a 'packet based' network
-- search this list for a note from me about the 'delayed error':
it may explain why it is so fast at *updating* data to the server.
And it perhaps gives you an impression of the network limits, rarely mentioned ...


c:)

the secret behind the skipping speed is a well maintained record data read cache,
valid by default for only a single second at the client.


d:)

Ron! benchmarked LetoDBf to be even much faster than the ADS server
-- not only by some %, but much more -- I would not have expected such a gap.
This would make LetoDb[f] the fastest possible DBF access over a TCP/IP network,
even with default settings ...


e:)

there may still be bugs in there, even though the reports are getting fewer ...


ah, the link:

https://github.com/elchs/LetoDBf


with great honours to the origin -- Alexander, Pavel: all well with you?

Rolf

Francesco Perillo

unread,
May 24, 2017, 3:28:37 PM5/24/17
to harbou...@googlegroups.com
Can you please expand on point C?

elch

unread,
May 25, 2017, 9:06:37 AM5/25/17
to Harbour Users
Hi, dear Francesco


Can you please expand on point C?
 

c:) read cache

the client (your application) works with a local copy of the record data,
so the next "field" access <right in time> does not lead to a new network
request to the server, as you already have the whole record data.

How long this 'cache' stays valid, before a new request to the server happens,
can be tuned -- by default it is one single second.


If a SKIP is requested, the server responds *not only* with a single record,
but with a *bunch* of records (LETO_SETSKIPBUFFER, default in letodb.ini: 21).
These are the records 'around' the record the client requested to SKIP to,
before or behind depending on the SKIPping direction.
Then the next SKIP in the application does not lead to a new network request,
but just exchanges the 'record buffer' at the client side.


When this buffer, aka 'read cache', must be refreshed is clearly defined:
# you want to skip to a record outside the cached records
# the timeout for the cache is reached ( LetoDBf can set it per table )
# an R-/F-lock on a record/ file --> request fresh record data from the server


Especially the last point is important, as in any network application:
between the time the record data is requested
-- but before you lock the record/ file -- the data may have changed.
So only after a lock is granted are you sure to see what is real ;-)
[ think of a SQL result set: you are working on data that meanwhile may have changed ]


---

It is all about the limit on the number of requests/ answers
that can be sent to the server in a timespan, as each is a 'whole packet'.
So most of LetoDB[f]'s effort goes into optimizing the needed number of packets,
and into sending mostly filled packets.
A single packet can be up to ~ 1500 bytes
-- the further it falls short of that, the more network performance decreases.


( One question about:

https://groups.google.com/forum/#!topic/harbour-users/jKqFD-UoF4s )


best regards

Rolf

avdesh singh

unread,
May 31, 2017, 8:31:03 AM5/31/17
to Harbour Users
Sir,
I am a FoxPro 2.6 DOS senior programmer. I also work with very heavy year-wise data and pick data for reports through a date field.
A change in the DBF file: make a new character field SPAC(10) and store the date in YYYYMMDD format at record-replacement time, then sort your records as needed. Such sorting has never failed me, and it picks up data fast.
Sir, I request your help in compiling my PRG file to make an EXE file; please give me the steps.

Thanks