I am writing an application in C# that I am trying to integrate with a
legacy program that stores its information in Btrieve databases. I am
looking for information on how to do this.
Due to financial constraints, as well as licensing problems,
purchasing a copy of Progressive Server is not very feasible.
Basically, my program needs to be able to read the information stored
in this database without installing Progressive.
I am looking for one of two things: an open-source library
for reading Btrieve data files, or a description of the file format.
I have been able to find many descriptions of file.ddf and field.ddf,
but the program I am integrating with only seems to use an index.ddf.
All of the freeware programs I have found require a file.ddf and
field.ddf to work.
There have been some hints that the SQL reference manual from the
Progressive website might have the file format I'm looking for, but I
have not been able to find any details.
This is all further complicated by the fact that I have no idea what
version of the file format I am working with, but I suspect it to be
pre-6.15.
If anyone could point me towards some solutions to any of these
problems, I would really appreciate it. I have been banging my head
against a wall all week. Thanks.
-Daniel
You were saying Progressive but meant Pervasive. You can get their
workgroup edition, which is good for up to 5 users, for around $50.00
per user. You will have to have these drivers if you want to read your
data; there is no option for reading it without them.
You will need the DDF files if you want to use ODBC to access the
data, so you will need to build them. BtSearch at www.nssdd.com can
help you analyze the data structure and build the DDF files. Once you
have the drivers and the DDF files, download the SDK from the
www.pervasive.com site. That should get you rolling.
Gil
Thank you very much for your help. I did indeed mean Pervasive and
not Progressive. I blame it on the hour I was posting at.
I don't necessarily need to use ODBC to access the data, although I
would prefer it. The problem with using a DDF builder is that I need
to read in this legacy data over and over again. The application I am
writing simply integrates with the older program; it doesn't replace
it. So the option of running another application, especially an
application our end users would have to pay $150 for, is not feasible.
If a program such as BtSearch exists, there must be some information
outside of Pervasive on how the file format is structured. That is
what I am looking for, or else a free toolkit for dealing with
it. The free part is very important. This is a student project, and
we don't have the resources to buy hundreds of dollars' worth of
software or to license the Pervasive library for distribution with our
application.
-Daniel
> have the drivers and the DDF files, download the SDK from the www.pervasive.com site. That should get you rolling.
>
> Gil
Gil
To reiterate what I am looking for:
-A free library for reading btrieve files, or
-A document describing the btrieve file format.
I would even be happy with a free program for converting between
Btrieve and any other database format, but especially MSSQL, Access,
MySQL or PostgreSQL. I find it difficult to believe that there is no
documentation on the Btrieve file format, given it has been around for
20 years.
-Daniel
I've been working with Btrieve for 10 years (i.e., not long by the
standards of some people here!) and I've NEVER come across a free
program that allows you to read Btrieve files directly. I know of
no free tool that takes the binary Btrieve file and provides
access to it via the Btrieve API.
> -A document describing the btrieve file format.
There are two formats to consider. The first is how Btrieve organises
the internals of a file to manage record storage, index management,
compression, etc. This is what the licensed Btrieve software handles.
The second is how a particular application chooses to store data
in a record.
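To make that second point concrete: even if you manage to pull raw
records out of a file, the bytes only mean something if you know the
layout the application used. Here's a minimal sketch in C, assuming a
purely hypothetical fixed-length customer record (CUST_REC,
print_customer and the field layout are all made up; the real layout
has to come from the vendor, the DDFs, or reverse engineering):

#include <stdio.h>
#include <string.h>

/* Hypothetical layout of one application's 44-byte fixed record.   */
/* Nothing inside the Btrieve file itself describes this; it is the */
/* application's own convention, like a C struct written to disk.   */
#pragma pack(push, 1)
typedef struct {
    char  code[8];        /* key 0: customer code, space padded */
    char  name[30];       /* descriptive text                   */
    short branch;         /* little-endian integer              */
    long  balance;        /* money stored in cents              */
} CUST_REC;
#pragma pack(pop)

/* Interpret one raw record obtained by whatever extraction method you use. */
static void print_customer( const unsigned char *raw, size_t len )
{
    CUST_REC r;
    if( len < sizeof r )
        return;                          /* not the record we expected */
    memcpy( &r, raw, sizeof r );
    printf( "%.8s  %.30s  branch %d  balance %ld.%02ld\n",
            r.code, r.name, r.branch, r.balance / 100, r.balance % 100 );
}

Without that knowledge (which is what the DDFs normally carry), a raw
dump of the records is just bytes.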
> I would even be happy with a free program for converting between
> Btrieve and any other database format, but especially MSSQL, Access,
> MySQL or PosgreSQL. I find it difficult to believe that there is no
> documentation on the Btrieve file format, given it has been around for
> 20 years.
As a format, it's always been internal to the btrieve data server.
>
> -Daniel
--
Guy
-- --------------------------------------------------------------------
Guy Dawson I.T. Manager Crossflight Ltd
gn...@crossflight.co.uk
- Database Engine Version: Go to www.goldstarsoftware.com/press.asp
and get the white paper on "Finding Your Database Engine Version".
When you've figured it out, let us know.
- File Formats: Use the BUTIL -STAT call, described in the white paper
on "Btrieve Data File Maintenance". If you do not find the BUTIL
utility, then you may have the Maintenance Utility (a graphical tool)
instead. Either way, do a STAT of one of your data files to determine
your version.
Once we know that, we can come up with some reasonable (and FREE)
solutions. Keep in mind, though, that anything that does not involve
accessing Btrieve natively will be much slower and require more steps.
If you want to get an idea of what I am talking about, see the END of
the white paper on "Accessing Btrieve Data Files from ODBC" at the same
site. It describes a reasonable way to access the data on your
existing Btrieve engine (whichever version it may be) and export it
to flat (unformatted) files that can then be read by your own
application.
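To give a rough idea of the reading side: assuming the export ends up
in the length-prefixed layout that BUTIL -SAVE produces (a decimal
record length, a comma, the raw record bytes, then a CR/LF), a minimal
reader sketch in C might look like this (read_save_record is just an
illustrative name):

#include <stdio.h>
#include <stdlib.h>

/* Read one BUTIL -SAVE style record: "<length>,<raw bytes>\r\n".   */
/* Returns a malloc'd buffer (caller frees) or NULL at end of file. */
static unsigned char *read_save_record( FILE *fp, long *reclen )
{
    long len;
    unsigned char *buf;

    if( fscanf( fp, "%ld,", &len ) != 1 || len < 0 )
        return NULL;                     /* EOF mark or not a record header */
    buf = malloc( (size_t)len );
    if( buf == NULL || fread( buf, 1, (size_t)len, fp ) != (size_t)len )
    {   free( buf );
        return NULL;                     /* truncated file */
    }
    fgetc( fp );  fgetc( fp );           /* consume trailing CR/LF */
    *reclen = len;
    return buf;
}

int main( int argc, char **argv )
{
    FILE *fp = (argc > 1) ? fopen( argv[1], "rb" ) : NULL;
    unsigned char *rec;
    long len;

    if( fp == NULL )
        return 1;
    while( (rec = read_save_record( fp, &len )) != NULL )
    {   printf( "record of %ld bytes\n", len );  /* hand off to your own parser */
        free( rec );
    }
    fclose( fp );
    return 0;
}

Interpreting the bytes inside each record is, of course, still up to
your application.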
Goldstar Software Inc.
Pervasive-based Products, Training & Services
Bill Bach
Bill...@goldstarsoftware.com
http://www.goldstarsoftware.com
*** Chicago: Pervasive Service & Support Class - March 2008 ***
danie...@gmail.com wrote:
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* DUMPDATA - Jim Kyle - February 1995 *
* *
* Lists all data records from any version of Btrieve file *
* without requiring Btrieve engine to be installed in *
* system. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
#include <stdio.h>
#include <conio.h>
#ifdef __TURBOC__
#include <alloc.h> // Borland-specific
#else
#include <malloc.h> // Microsoft version
#endif
#include <string.h>
#include <ctype.h>
#include "btrieve.h"
FILE *fp; // global for convenience
FCRTOP Fcr;
int PageSize, RecLen, PhyLen;
char *PgmName = "DUMPDATA";
char outbuf[80];
FILE *fpout = stdout;
int CurFmt; // file format: 1 = original (pre-6.0), 2 = new
int rectype = 0; // 0 fixed, 1 variable,
// 2 var trunc, 3 compressed, 4 uses VAT, 5 compressed variable
VRECPTR Vrec; // current vrec pointer
int fragno; // fragment number from Vrec
long fragpg; // vrec logical page number from Vrec
long fragfo; // file offset to physical page
int *fragpp; // fragment index array base
int fragi, // index into fragpp array
frago, // offset to start of fragment
fragl; // length of fragment
BYTE Vbfr[MAXBTRPG];
char wrkbuf[MAXBTRPG]; // for reading anything into
char *fmt[] = {
"original",
"new"
};
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure undoes Btrieve's word-swapping. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
static long swap( long n )
{
#ifdef __TURBOC__
asm les dx,n; // ASM trick: DX = low word, ES = high word
asm mov ax,es; // AX = high word; long result returns in DX:AX
#else
return ((n>>16)&0xFFFFL) | (n<<16); // portable equivalent
#endif
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure dumps data, translating to ASCII *
* if in printable range, else outputting as hex. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
static void dumpdata( BYTE *data, int nbr )
{ int i, j;
char *h = "0123456789ABCDEF";
for(i=0; i<nbr; i+=64)
{ for( j=0; j<64; j++ )
if( (i+j) < nbr )
putchar( h[ (data[i+j]>>4) & 15] );
printf( "\n\t " );
for( j=0; j<64; j++ )
if( (i+j) < nbr )
putchar( h[ data[i+j] & 15] );
printf( "\n\t " );
for( j=0; j<64; j++ )
if( (i+j) < nbr )
putchar( isprint( data[i+j] ) ? data[i+j] : '*' );
printf( "\n\t " );
for( j=0; j<64; j++ )
if( (i+j) < nbr )
putchar( '-' );
if( j == 64 )
printf( "\n\t " );
}
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure converts a Logical Page number to *
* a file offset, using the PAT for new format files. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
static long lp_pp( long lp )
{ long ret, pat1, pat2;
unsigned short u1, u2, pppat;
if( CurFmt != 1 ) // only do lookup for new
{ ret = 2; // first PAT pair on 2, 3
pppat = (PageSize >> 2) - 2; // pages per PAT
while( lp > pppat ) // off current page
{ lp -= pppat; // so tally down index
ret += (PageSize >> 2); // up the PAT pp nbr
}
pat1 = ret * (long)PageSize; // first PAT of pair
pat2 = pat1 + (long)PageSize; // second right after it
fseek( fp, pat1+4L, 0 ); // get both usage counts
fread( &u1, 2, 1, fp );
fseek( fp, pat2+4L, 0 );
fread( &u2, 2, 1, fp );
if( u1 > u2 ) // choose most recent one
ret = pat1;
else
ret = pat2;
ret += (long)(( lp << 2 ) + 4L ); // position in PAT
fseek( fp, ret, 0 );
fread( &lp, 4, 1, fp ); // read it into LP
lp = swap( lp ); // and un-word-swap it
lp &= 0xFFFFFFL;
}
if( lp == 0xFFFFFFL || lp == -1L ) // NULL pointer values
ret = -1L;
else // convert to offset
ret = lp * PageSize;
return ret;
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure writes a message to CRT and then *
* waits for user to press ENTER key. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
static void oflomsg( char * p, int r, long cpos )
{ printf( "\07%s buffer overflow!\07\n"
"Press ENTER to continue\n%d\t ", p, r );
while( getch() != 13 )
/* wait for user */ ;
dumpdata( (BYTE *)wrkbuf, r );
fseek( fp, cpos + (long)PhyLen, 0 );
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure checks one record for validity and *
* then outputs its data, expanding as necessary. *
* Flag byte added as promised in text. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
void DmpRec( void ) // dumps single record
{ long cpos = ftell( fp ); // save position at entry
int i, b, r = RecLen, count, state;
if( rectype != 3 && CurFmt == 2 ) // new format usage test
{ fread( &i, 2, 1, fp );
if( !i ) // ignore unused records
{ fseek( fp, cpos + (long)PhyLen, 0 );
return;
}
fread( wrkbuf, PhyLen-2, 1, fp ); // load workbuf with data
}
else // test for empty record
{ int empty = 1;
fread( wrkbuf, PhyLen, 1, fp ); // load wrkbuf with data
for( i=4; i<PhyLen; i++ ) // skip over pointer
if( wrkbuf[i] ) // something there
{ empty = 0; // so not empty
break;
}
if( empty ) // restore file, ignore
{ fseek( fp, cpos + (long)PhyLen, 0 );
return;
}
}
if( rectype ) // var, trunc, compr
{ long fps = ftell( fp ); // save file position
int lofs;
BYTE cmpalg, savect, countflag;
switch( rectype )
{
case 1: // VARIABLE LENGTH DATA
case 2: // BLANK TRUNCATION
memcpy( &Vrec, wrkbuf + r, 4 );
memcpy( &count, wrkbuf + r + 4, 2 );
fragpp = (int *)Vbfr; // set pointer
nxvr: fragno = VRFrag( Vrec ); // for multiple frags
fragpg = VRPage( Vrec );
fragfo = lp_pp( fragpg );
if( fragfo == -1L || fragno > 254 )
goto vrdun; // no more to do
fseek( fp, fragfo, 0 ); // read page into buffer
fread( &Vbfr, MAXBTRPG, 1, fp );
fragi = ((PageSize - 1) >> 1 ) - fragno;
frago = fragpp[ fragi ] & 0x7FFF;
for(lofs=1; fragpp[fragi - lofs] == -1; lofs++)
/* all done in test! */ ;
fragl = (fragpp[ fragi - lofs ] & 0x7FFF ) - frago;
if( CurFmt == 2 || fragpp[fragi] & 0x8000 )
{ Vrec = *(VRECPTR *)(&Vbfr[ frago ]);
frago += sizeof( VRECPTR );
fragl -= sizeof( VRECPTR );
}
else
{ Vrec.lo = Vrec.mid = Vrec.hi = Vrec.frag = 0x00FF;
}
memcpy( wrkbuf + r, Vbfr + frago, fragl );
r += fragl;
goto nxvr; // check for next frag
vrdun: if( rectype == 2 ) // restore blanks
{ while( count-- && r < 4095 ) // stay in buffer!
wrkbuf[ r++ ] = ' ';
}
break;
case 3: // COMPRESSED DATA
case 5: // COMPRESSED, VARIABLE
memcpy( &Vrec, wrkbuf + r, 4 );
fragpp = (int *)Vbfr; // set pointer
cmpalg = wrkbuf[4];
if( !cmpalg ) // deleted record, ignore
{ fseek( fp, cpos + (long)PhyLen, 0 );
return;
}
count = 0; // clear count vars
savect = 0;
countflag = 0;
r = 0; // expansion index
state = 1; // copy strings first
nxcr: fragno = VRFrag( Vrec ); // for multiple frags
fragpg = VRPage( Vrec );
fragfo = lp_pp( fragpg ); // actual file offset
if( fragfo == -1L || fragno > 254 )
goto crdun; // end of chain
fseek( fp, fragfo, 0 ); // read page into buffer
fread( &Vbfr, PageSize, 1, fp );
fragi = ((PageSize - 1) >> 1 ) - fragno;
frago = fragpp[ fragi ] & 0x7FFF;
fragl = (fragpp[ fragi - 1 ] & 0x7FFF ) - frago;
if( CurFmt == 2 || fragpp[fragi] & 0x8000 )
{ Vrec = *(VRECPTR *)(&Vbfr[ frago ]);
frago += sizeof( VRECPTR );
fragl -= sizeof( VRECPTR );
}
else
{ Vrec.lo = Vrec.mid = Vrec.hi = Vrec.frag = 0x00FF;
}
if( countflag ) // count spans frags
{ frago--; // adj offset, length
fragl++;
Vbfr[frago] = savect; // set first byte in
count = 0; // clear out count
savect = 0; // clear save byte
countflag = 0; // clear flag
}
for( i=0; i<fragl; ) // decompression loop
{ if( count < 1 ) // get new count
{ count = *(int *)(Vbfr + frago + i );
i += 2; // advance pointer
}
if( i >= fragl ) // at fragment end
break; // so get another
if( state ) // process data pair
{ while( count-- ) // copy is state 1
{ wrkbuf[ r++ ] = Vbfr[ frago + (i++) ];
if( r > 4090 ) // error trap...
{ oflomsg( "Copy", r, cpos );
return; // skip this record
}
if( i == fragl && // string spans frags
count ) // and isn't done yet
break; // get next fragment
} // copy loop
if( count < 1 )
state = 0; // repeat next pair
}
else
{ while( count-- ) // repeat is state 0
{ wrkbuf[ r++ ] = Vbfr[ frago + i ];
if( r > 4090 ) // error trap...
{ oflomsg( "Repeat", r, cpos );
return; // skip this record
}
} // repeat loop
i++; // over repeat byte
state = 1; // copy next pair
}
if( i == fragl-1 ) // count spans frags
{ savect = Vbfr[ frago + i++ ];
countflag = 1; // flag byte as saved
break; // get next
}
} // decompression loop
goto nxcr; // to get next fragment
crdun: break; // record complete now
}
fseek( fp, fps, 0 );
}
Fcr.Nrecs--; // tally down count
if( fpout == stdout )
{ printf( "%d,\t ", r ); // human-readable format
dumpdata( (BYTE *)wrkbuf, r );
putchar( '\n' );
}
else
{ fprintf( fpout, "%d,", r ); // BUTIL -SAVE format
for( i=0; i<r; i++ )
{ b = 255 & wrkbuf[i];
fprintf( fpout, "%c", b );
}
fprintf( fpout, "\r\n" );
}
fseek( fp, cpos + (long)PhyLen, 0 ); // restore file position
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure goes through all possible records on *
* a page and calls DmpRec for each. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
void DumpRecs( void )
{ int recpg = (PageSize - 6) / PhyLen; // page capacity
int currec; // current record
for( currec=0; currec < recpg; currec++ )
DmpRec();
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure cycles through all pages, calling *
* DumpRecs routine for each data page in turn. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
void DoItToIt( void )
{ long x, fps = 0L, pgct;
unsigned u;
int lpg=1;
char *rt[] = { "Fixed Length", "Variable Length",
"Variable, truncated", "Compressed",
"Uses VAT, not supported",
"Compressed variable-length" };
RecLen = Fcr.RecLen; // save global values
PhyLen = Fcr.PhyLen;
pgct = swap( Fcr.Npages ); // get number of pages
Fcr.Nrecs = swap( Fcr.Nrecs );
printf( "File %s is in %s format; pagesize = %d, "
"has %ld pages\n",
outbuf,
fmt[CurFmt-1],
PageSize,
pgct );
printf( "Record count = %ld (%s).\n\n",
Fcr.Nrecs, rt[rectype] );
if( Fcr.Nrecs < 1L || rectype == 4 ) // VAT's not supported
return;
printf( "Write SAVE file, or VIEW data on CRT (S or V )? " );
for( u=1; u; )
switch( getch() )
{
case 's':
case 'S':
printf( "\nSave to filename: " );
gets( wrkbuf ); // get filename
for( u=0; wrkbuf[u] > 0x1F; u++ ) // find end of name
;
wrkbuf[u] = 0;
if( strlen(wrkbuf) ) // no name means view
fpout = fopen( wrkbuf, "wb" );
case 'v': // fall through
case 'V':
u = 0;
putchar( '\n' );
break;
case 27:
case 3:
return;
}
if( CurFmt > 1 )
lpg = 1; // new starts at one
else
lpg = 0; // old starts at zero
for( ; lpg < (unsigned)pgct; lpg++ ) // do rest of pages
{ fps = lp_pp( lpg ); // convert to file offset
if( fps < 0L )
break; // NULL-pointer, get out
fseek( fp, fps, 0 ); // seek to start of page
fread( &x, 4, 1, fp ); // read header & usage
fread( &u, 2, 1, fp );
if( (u & 0x8000) ) // dump data records
DumpRecs();
}
if( fpout != stdout ) // close data file
{ fprintf( fpout, "%c", 0x1A ); // after adding EOF mark
fclose( fpout );
fpout = stdout;
}
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure determines a file's type, then *
* loads Fcr, and establishes record type. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
int GetFormat( void )
{ int fmt = 0; // 0 not Btrieve
// 1 old, 2 new
int testbuf[5]; // test first 10 bytes
fread( testbuf, 5, 2, fp );
if( testbuf[1] == 0 ) // page sequence must be zero
{ if( testbuf[0] == 0 )
{ fmt = 1; // original format
fseek( fp, 0L, 0 );
}
else if( testbuf[0] == 0x4346 ) // 'FC' signature
{ fmt = 2;
fseek( fp, (long)testbuf[4]+4, 0 ); // check next FCR
fread( &testbuf[0], 2, 1, fp );
if( testbuf[0] > testbuf[2] ) // second is valid
fseek( fp, (long)testbuf[4], 0 );
else // use the first
fseek( fp, 0L, 0 );
}
}
if( fmt ) // load valid FCR
{ fread( &Fcr, 1, sizeof( FCRTOP ), fp );
PageSize = Fcr.PagSize;
if( Fcr.UsrFlgs & 8 &&
( Fcr.VRecsOkay || Fcr.UsrFlgs & 1 ))
rectype = 5; // compressed variable data
else if( fmt == 2 && Fcr.UsrFlgs & 0x0800 )
rectype = 4; // uses VAT's
else if( Fcr.UsrFlgs & 8 )
rectype = 3; // compressed fixed data
else if( Fcr.VRecsOkay || Fcr.UsrFlgs & 1 )
{ if( (BYTE)Fcr.VRecsOkay == 0x00FD || Fcr.UsrFlgs & 2 )
rectype = 2; // var trunc
else
rectype = 1; // variable length
}
else
rectype = 0; // fixed length
}
else
PageSize = 0;
return fmt;
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure processes a single file. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
int Do_File( char * fnm )
{ int ret = 0; // assume success
strupr( fnm );
fp = fopen( fnm, "rb" );
if( fp )
{ strcpy( outbuf, fnm ); // save for reports
CurFmt = GetFormat();
switch( CurFmt )
{
case 0:
printf( "%s is not a Btrieve file.\n", fnm );
ret = 1;
break;
case 1:
case 2:
DoItToIt();
break;
default:
puts( "Undefined format code, should never happen!" );
ret = 99;
break;
}
fclose( fp );
}
else
{ perror( fnm );
ret = 2;
}
return ret;
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure tells how to use the program. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
void Usage( void )
{ printf( "Usage: %s file1 [file2 [...]]\n", PgmName );
printf( "\t where filenames can continue until command line "
"is full\n" );
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure displays a standard banner heading *
* each time the program runs. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
void Banner( void )
{ printf( "\n\t %s - from \"Btrieve Complete\" by Jim Kyle\n",
PgmName );
printf( "\t Copyright 1995 by Jim Kyle - All Rights"
" Reserved\n" );
printf( "\t Use freely, but do not distribute "
"commercially.\n\n" );
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\
* *
* This procedure is program entry point. *
* *
\* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
int main( int argc, char **argv )
{ int retval = 0;
Banner();
if( argc < 2 )
{ Usage();
retval = 255;
}
else
{ int i;
for( i=1; i < argc; i++ )
{ retval = Do_File( argv[i] ); // process each file
if( retval ) // if error, wait
while( getch() != 13 )
/* wait right here for CR */ ;
}
}
return retval;
}
daniel.y...@gmail.com wrote:
I'm posting in this thread because it is relatively new and I have the
same requirements as the initial poster.
I'm afraid this is rather a FAQ subject, but I couldn't find the FAQs
of the newsgroup with Google.
I'll summarize my requirements here:
1) I have a bunch of btrieve files to read
2) My application is in C# (and I have a limited budget).
3) I have the DDF files that the native application uses to access the
data. The implemented solution should be able to read the data without
the legacy application, though (i.e. on different computers).
4) The engine version installed on my system by the application that
creates the data I have to read is: the W32MKDE.EXE file is 2.0.430.1
(which, according to the documentation, corresponds to the 6.15.430
patch update released in 3/1997).
5) My application won't have to modify data in production, but will
read it at given intervals to perform its own stats and
visualizations. I should be able to read the info programmatically
(i.e. without human intervention).
6) Optional: in the long term I could also be interested in writing
data into the files.
I have a couple of questions:
I've tried downloading the agtech.co.jp solution but it won't work on
my system (VS2008; I'll also try with VC2005 in the next few days), and
I didn't find an example of how to read data without an MKD file. Has
anyone tried the http://www.agtech.co.jp/products/BtrieveClasses/index_e.html
components?
Is there something available to dump the data while stripping out the
page structure? (Something like Jim Kyle's DUMPDATA posted earlier in
this thread, but that compiles on Win32.)
Any help is highly appreciated.
Claudio
> Is there something available to dump the data while stripping out the
> page structure? (Something like Jim Kyle's DUMPDATA posted earlier in
> this thread, but that compiles on Win32.)
Kyle.pas DOES compile under Win32 (checked in Delphi 2007):
{$IFOPT I+} {$DEFINE IPLUS} {$ENDIF}
{--------------------------------------------------------------------
{ Software Development -- PROGRAM SPECIFICATION }
{-------------------------------------------------------------------}
{ Author : R. Paulavichius }
{ Language : Borland Pascal V7.0 }
{ Logfile : KYLE.PAS }
{ Project : Financial accounting }
{ Date : 16-Aug-96 }
{ Revision : }
{-------------------------------------------------------------------}
{ Origin : Jim Kyle. Btrieve complete. }
{ A Guide for Developers and System Administrators. }
{ Addison-Wesley Publishing Company. 1995. }
{-------------------------------------------------------------------}
unit Kyle;
interface
function GetTotalUniqueKeys( FileName: string; KeyNo: integer ):
longint;
implementation
const
MAX_NO_KEYS = 24; { V 5.10 }
var
errCode: integer;
errStr: string;
const
keyDUPLICATE = $01;
keyMODIFIABLE = $02;
keyBINARY = $04;
keyNULL = $08;
keySEGMENTED = $10;
keyALT_COLL_SEQ = $20;
keyDESCENDING = $40;
keySUPPLEMENTAL = $80;
keyEXTENDED = $100;
keyMANUAL = $200;
type
btrver = byte;
TPGPTR = record
u1 : record
case btrver of
5 : (
v5 : record
hi : word; { ( u1.v5.hi << 16 ) + lo = page number }
end
);
6 : (
v6 : record
hi : byte; { ( u1.v6.hi << 16 ) + lo = page number }
code : byte;
end;
);
end;
lo : word;
end;{ TPGPTR }
PPGPTR = ^TPGPTR;
RECPTR = record
wl : longint;
end;{ RECPTR }
VRECPTR = record { (hi << 8 ) + lo = page }
hi : byte;
lo : byte;
mid : byte;
frag : byte;
end;{ VRECPTR }
FSPSET = record
nxpg : TPGPTR; { next free page }
nxrec : RECPTR; { next free record }
nxvrec : VRECPTR; { next free vrecord }
end;{ FSPSET } { v 6+ only }
{
FCR Header Layout
uses 9 unions (r1 ... r9) for pre/post 6.0 variations
within each, structs v5 and v6 hold differences
}
integer2 = word;
{Key-segment Specification}
TSEGSPEC = packed record
Entry : TPGPTR; { 00 - 0 if continuation segment
(old only), }
{ else root page number }
Total : longint; { 04 - count of unique entries
for }
{ this key (word-swapped) }
Kflags : integer2; { 08 - key definition flags
(will }
{ vary with each segment) }
Siz : integer2; { 0A - total length all
segments }
Klen : integer2; { 0C - full size including dup
ptrs }
PgMax : integer2; { 0E - max number of items per
page }
PgMin : integer2; { 10 - min number of items per
page }
DupOffset : integer2; { 12 - offset to dupes from }
{ start of data record }
{preceding info (except Kflags) used only once per key, not per
segment }
Beg : integer2; { 14 - offset of first byte of
key }
Len : integer2; { 16 - length in bytes, this segment }
r1 : record
case btrver of
5 : (
v5 : record
v5empty : array[ 0..3 ] of char;{ 18 - unused in
older format }
end;{ v5 }
);
6 : (
v6 : record
KeyID : char; { 18 - 6+ only, assigned key ID
value }
ACSPage : array[ 0..2 ] of char;{ 19 - 6.1+ only,
unused previously; ACS ID }
end;{ v6 }
);
end;{ r1 }
ExTyp : byte; { 1C - data type code for
segment }
NulVal : char; { 1D - null value for this
segment }
end;{ SEGSPEC }
PSEGSPEC = ^TSEGSPEC;
seg_spec_array_type = array[ 0 .. MAX_NO_KEYS-1 ] of TSEGSPEC;
seg_spec_array_ref_type = ^seg_spec_array_type;
TFCRTOP = packed record
r1 : record
case btrver of
5 : (
v5 : record
PgSeg : TPGPTR; { 00 - page number }
Usage : integer2; { 04 - match with usage count }
Version : integer2;{ 06 - version code, <0 if
owned }
end;
);
6 : (
v6 : record
RecSig : integer2; { 00 - 'FC' }
SeqNbr : integer2; { 02 - always binary zeroes }
Usage : longint; { 04 - usage count }
end;
);
end;{ r1 }
PagSize : integer2; { 08 - in bytes }
AccelFlags : integer2; { 0A - open, first update,
cleared on close }
NxtReusePag : TPGPTR; { 0C - available deleted page }
NxtReuseRec : RECPTR; { 10 - available deleted record }
Nkeys : integer2; { 14 - number of keys defined }
RecLen : integer2; { 16 - data rec length excl
pointers }
PhyLen : integer2; { 18 - physical rec length incl
ptrs }
Nrecs : longint; { 1A - number of records in file
(word swapped) }
r2 : record
case btrver of
5 : (
v5LastAlloc : TPGPTR;{ 1E - last alloc page }
);
6 : (
v6vac1 : longint; { 1E - unused }
);
end;{ r2 }
Consistent : integer2; { 22 - 0xFFFF = need recovery }
r3 : record
case btrver of
5 : (
v5ExtFile : integer2;{ 24 - number of files }
{ (1 normal, 2 extended) }
);
6 : (
v6vac2 : integer2; { 24 - unused }
);
end;{ r3 }
Npages : longint; { 26 - number of pages in file
(word-swapped) }
r4 : record
case btrver of
5 : (
v5 : record
FreeBytes : integer2;{ 2A - free bytes on last
page }
PrePages : integer2; { 2C - indicates preimage
pages used }
end;
);
6 : (
v6vac3 : integer2; { 2A - unused }
v6vac4 : integer2; { 2C - unused }
);
end;{ r4 }
Owner : array[ 1..9 ] of byte;{ 2E - encoded owner name }
OwnerFlags : byte; { 37 - flags byte }
VRecsOkay : byte; { 38 - FF = ok, FD = trunc
blanks }
r5 : record
case btrver of
5 : (
v5vac1 : array[ 1..3 ] of char;{ 39 - unused }
);
6 : (
v6VFree : array[ 1..3 ] of char;{ 39 - 3-byte PGPTR
to first V page }
{ with free space }
);
end;{ r5 }
ACSName : array[ 1..8 ] of char;{ 3C - 8-byte ACS
identifier }
r6 : record
case btrver of
5 : (
v5 : record
Extended : integer2;{ 44 - 0xFFFF if file
extended, else 0 }
v5vac2 : integer2; { 46 - unused }
end;{ v5 }
);
6 : (
v6vac5 : integer2; { 44 - always 0 for v6.0 }
v6vac6 : integer2; { 46 - unused }
);
end;{ r6 }
PreAlloc : integer2; { 48 - number of pages
preallocated }
{ at create time }
r7 : record
case btrver of
5 : (
v5vac3 : array[ 1..4 ] of integer2;{ 4A - unused }
);
6 : (
v6 : record
Version : integer2;{ 4A - version number, }
{ 3 for 4.0 and before }
PaPage : integer2; { 4C - PAT pair number for }
{ first unalloc phys page }
PaOffset : integer2;{ 4E - offset in PaPage for}
{ first unalloc phys page }
PaLast : integer2; { 50 - last PAT page in page
array }
end;{ v6 }
);
end;{ r7 }
LastOp : integer2; { 52 - last operation performed
including bias }
res1 : integer2; { 54 - reserved for future
development }
r8 : record
case btrver of
5 : (
v5 : record
v5vac4 : array[ 1..4 ] of longint;{ 56 - unused }
ResFCB : array[ 1..3 ] of integer2;{ 66 - used
with reserved extended FCB }
ExtFirst : TPGPTR;{ 6C - first page number of }
{ extended file }
PREPgs : integer2; { 70 - total page count of PRE
file }
end;{ v5 }
);
6 : (
v6 : record
MainBitMap : longint;{ 56 - identifies active FCR
and PATs }
Backup : FSPSET;{ 5A - backup free pool }
v6vac7 : array[ 1..6 ] of integer2;{ 66 -
unused }
end;{ v6 }
);
end;{ r8 }
DupOffset : integer2; { 72 - offset of first dup-key
ptr }
{ from rec start }
NumDupes : char; { 74 - Number of dupe ptrs on
record }
NumUnused : char; { 75 - Number of unused dupe
ptrs }
r9 : record
case btrver of
5 : (
v5 : record
v5vac5 : array[ 1..10 ] of word;{ 76 - unused }
PreFCB : array[ 1..10 ] of char;{ 8A - used with
PRE FCB }
v5vac6 : longint; { 94 - unused }
Path : array[ 1..64 ] of byte; { 98 - EXTEND
filepath, }
{ also holds PRE path }
v5vac7 : array[ 1..46 ] of char;{ D8 - unused }
end;{ v5 }
);
6 : (
v6 : record
KATSize : char;{ 76 - Number of keys }
KATUsed : char;{ 77 - Number of segments }
KATOffset : word;{ 78 - offset from FCR start to
KAT data }
KAT512 : array[ 1..8 ] of word;{ 7A - KAT for 512-
byte page files }
v6vac8 : array[ 1..10 ] of char;{ 8A - unused }
Ridata : VRECPTR;{ 94 - special RI definition
pointer }
Free : array[ 0..4 ] of FSPSET;{ 98 - allows
concurrent operations }
v6vac9 : longint;{ D4 - unused }
DupeRes : char;{ D8 - number of dupe-key ptrs }
{ reserved at create time }
PrivDataSz : char;{ D9 - size of private data
field }
{ in data records }
v6vac10 : array[ 1..44 ] of char;{ DA - unused }
end;{ v6 }
);
end;{ r9 }
UsrFlgs : integer2; { 106 - user-specified flags }
{ (CREATE bitmap) }
VarThresh : integer2; { 108 - variable space
threshold }
ACSpage : TPGPTR; { 10A - allows ACS to be added }
{ after creation of file }
ComprLen : integer2; { 10E - record length if file is
compressed }
SegSpecs : seg_spec_array_type;
end;{ TFCRTOP }
PFCRTOP = ^TFCRTOP;
function LongSwap( n: longint ): longint;
type
y = record
w1: word;
w2: word;
end;
var
m: y absolute n;
o: y;
p: longint absolute o;
begin
o.w1 := m.w2;
o.w2 := m.w1;
LongSwap := p;
end;{ LongSwap }
{$I-}
function BtrieveGetFormat( var f: file; var fmt, PageSize: integer ):
boolean;
label
FAILURE;
var
TestBuf: array[0..4] of integer2;
begin
BtrieveGetFormat := FALSE;
errCode := 0;
errStr := '';
fmt := 0;
BlockRead( f, TestBuf, 5*2 );
errCode := IOResult;
if errCode <> 0 then goto FAILURE;
if TestBuf[1] = 0 then begin{Page sequence must be zero}
if TestBuf[0] = 0 then begin
fmt := 1; {Original format}
Seek( f, 0 );
errCode := IOResult;
if errCode <> 0 then goto FAILURE;
end{ [0]=0 }
else if TestBuf[0] = $4346 then begin{'FC' signature}
fmt := 2;
{Check next FCR}
Seek( f, TestBuf[4]+4 );
errCode := IOResult;
if errCode <> 0 then goto FAILURE;
BlockRead( f, TestBuf, 2 );
errCode := IOResult;
if errCode <> 0 then goto FAILURE;
if TestBuf[0] > TestBuf[2] then
{Second is valid} Seek( f, TestBuf[4] )
else
{Use the first} Seek( f, 0 );
errCode := IOResult;
if errCode <> 0 then goto FAILURE;
end;{ $4346 }
end;{ [1]=0 }
if fmt <> 0 then
PageSize := TestBuf[4]
else
PageSize := 0;
BtrieveGetFormat := TRUE;
FAILURE:
end;{ BtrieveGetFormat }
{$IFDEF IPLUS} {$I+} {$ENDIF}
{$I-}
function GetTotalUniqueKeys( FileName: string; KeyNo: integer ):
longint;
label
FAILURE;
var
f : file;
ValidFCR : PFCRTOP;
CurFmt : integer;
PageSize : integer;
k, sn : integer;
begin
GetTotalUniqueKeys := 0;
ValidFCR := NIL;
errCode := 0;
errStr := '';
Assign( f, FileName );
errCode := IOResult;
if errCode <> 0 then begin
errStr := FileName;
goto FAILURE;
end;
Reset( f, 1 );
errCode := IOResult;
if errCode <> 0 then begin
errStr := FileName;
goto FAILURE;
end;
Seek( f, 0 );
errCode := IOResult;
if errCode <> 0 then begin
errStr := FileName;
goto FAILURE;
end;
if not BtrieveGetFormat( f, CurFmt, PageSize ) then begin
errStr := FileName;
goto FAILURE;
end;
case CurFmt of
1 : ;
2 : ;
else begin
errCode := 030; {Not a btrieve File XXXXXXXX}
errStr := FileName;
goto FAILURE;
end;
end;{ CurFmt }
ValidFCR := GetMemory( PageSize );
try
BlockRead( f, ValidFCR^, PageSize );
errCode := IOResult;
if errCode <> 0 then begin
errStr := FileName;
Close( f );
if IOResult <> 0 then ;
Exit; {finally}
end;
sn := 0;
for k := 0 to (KeyNo-1)-1 do begin
while (ValidFCR^.SegSpecs[ k+sn ].Kflags and keySEGMENTED)
<> $0000 do
Inc( sn );
end;{ k }
GetTotalUniqueKeys := LongSwap( ValidFCR^.SegSpecs[ (KeyNo-1)
+ sn ].Total );
Close( f );
errCode := IOResult;
if errCode <> 0 then begin
errStr := FileName;
end;
finally
if ValidFCR <> NIL then
FreeMem( ValidFCR, PageSize );
end;
FAILURE:
Close( f );
if IOResult <> 0 then ;
end;{ GetTotalUniqueKeys }
{$IFDEF IPLUS} {$I+} {$ENDIF}
end.
Rimvydas
You will then need, though, to change your application from linking
with W3BTRV7.LIB and W3BTRV7.DLL to linking with WBTRV32.DLL instead.
The function calls are the same -- it is just a slightly older
interface.
As for price, you can get the SDK for free from Pervasive's web site.
If you want a permanent license of the NEW engine, then it'll cost
under $50, but you don't need it if you retrofit to talk to the Btrieve
6.15 engine instead.
This gives you Btrieve-level access to the files, and you can likely
both read and write. It is fast & flexible, and needs nothing more
than what you have today.
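To sketch what that Btrieve-level access looks like (in C here; from
C# the same entry point can be reached via P/Invoke), assuming the
classic BTRCALL export from WBTRV32.DLL and the standard operation
codes. The parameter types below are from memory, so check them
against btrapi.h in the SDK, and the file path is obviously a
placeholder:

#include <stdio.h>
#include <string.h>
#include <windows.h>

/* Standard Btrieve operation codes */
#define B_OPEN        0
#define B_CLOSE       1
#define B_GET_NEXT    6
#define B_GET_FIRST  12
#define B_END_OF_FILE 9          /* status code, not an operation */

/* BTRCALL signature as I remember it from btrapi.h -- verify locally. */
/* Build this as a 32-bit program so it can load WBTRV32.DLL.          */
typedef short (__stdcall *BTRCALL_FN)( unsigned short operation,
    void *posBlock, void *dataBuf, unsigned long *dataLen,
    void *keyBuf, unsigned char keyLen, char keyNum );

int main( void )
{
    char          pos[128] = {0};             /* Btrieve position block */
    unsigned char data[4096];                 /* >= your file's PhyLen  */
    char          key[255] = "C:\\LEGACY\\CUSTOMER.DAT";  /* hypothetical */
    unsigned long len;
    short         status;
    HMODULE       dll = LoadLibraryA( "WBTRV32.DLL" );
    BTRCALL_FN    Btr = dll ? (BTRCALL_FN)GetProcAddress( dll, "BTRCALL" ) : NULL;

    if( Btr == NULL )
    {   fprintf( stderr, "WBTRV32.DLL not found\n" );
        return 1;
    }
    len = 0;                                  /* no owner name supplied */
    status = Btr( B_OPEN, pos, data, &len, key,
                  (unsigned char)strlen( key ), -2 );    /* -2 = read-only */
    if( status != 0 )
    {   fprintf( stderr, "Open failed with status %d\n", status );
        return 1;
    }
    len = sizeof data;
    status = Btr( B_GET_FIRST, pos, data, &len, key, sizeof key, 0 ); /* key 0 */
    while( status == 0 )
    {   printf( "got a %lu-byte record\n", len );   /* parse per your layout */
        len = sizeof data;
        status = Btr( B_GET_NEXT, pos, data, &len, key, sizeof key, 0 );
    }
    if( status != B_END_OF_FILE )
        fprintf( stderr, "Read stopped with status %d\n", status );
    len = 0;
    Btr( B_CLOSE, pos, data, &len, key, 0, 0 );
    FreeLibrary( dll );
    return 0;
}

That is about all the API surface you need for sequential reads; the
rest is knowing your record layout.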
Conversely, you can make it MUCH easier by buying a PSQL workgroup
engine license, copying the data files and DDFs to your local
workstation, and then using the newer ODBC drivers to extract your
data directly. SQL is MUCH easier to use, and you can even use other
ODBC-compliant applications to read and write data, including ETL
applications like Pervasive's Data Integrator or Pentaho (open
source). Writing back will be much more difficult, but if this is not
an immediate requirement, then you'll be OK for a while...
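For comparison, once the engine and a DSN are configured, the ODBC
route is ordinary ODBC code. A minimal sketch in C (the DSN name
"LEGACYDB" and the Customer table are placeholders for whatever your
DDFs define; link with odbc32.lib):

#include <stdio.h>
#include <windows.h>
#include <sql.h>
#include <sqlext.h>

int main( void )
{
    SQLHENV  env;
    SQLHDBC  dbc;
    SQLHSTMT stmt;
    SQLCHAR  name[64];
    SQLLEN   ind;

    SQLAllocHandle( SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env );
    SQLSetEnvAttr( env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0 );
    SQLAllocHandle( SQL_HANDLE_DBC, env, &dbc );

    /* "LEGACYDB" is a placeholder DSN pointing at the data files + DDFs */
    if( !SQL_SUCCEEDED( SQLConnect( dbc, (SQLCHAR *)"LEGACYDB", SQL_NTS,
                                    NULL, 0, NULL, 0 ) ) )
    {   fprintf( stderr, "could not connect to DSN\n" );
        return 1;
    }

    SQLAllocHandle( SQL_HANDLE_STMT, dbc, &stmt );
    SQLExecDirect( stmt, (SQLCHAR *)"SELECT Name FROM Customer", SQL_NTS );
    while( SQL_SUCCEEDED( SQLFetch( stmt ) ) )
    {   SQLGetData( stmt, 1, SQL_C_CHAR, name, sizeof name, &ind );
        printf( "%s\n", name );
    }

    SQLFreeHandle( SQL_HANDLE_STMT, stmt );
    SQLDisconnect( dbc );
    SQLFreeHandle( SQL_HANDLE_DBC, dbc );
    SQLFreeHandle( SQL_HANDLE_ENV, env );
    return 0;
}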
Goldstar Software Inc.
Pervasive-based Products, Training & Services
Bill Bach
Bill...@goldstarsoftware.com
http://www.goldstarsoftware.com
*** Chicago: Pervasive Service & Support Class - March 2008 ***
Gloria wrote: