Limits of the DBF format


Yakano

Jun 5, 2022, 3:52:36 AM
to Harbour Users
Hello everyone

Some databases that I use are close to exceeding 1000 fields, dangerously approaching what I thought was the limit (1024).

I have searched for information on the internet, but everything is quite confusing, so I finally decided to do my own tests.

With this code, I have managed to create a DBF with 2046 fields, but not 2048... (?)

/**/
REQUEST DBFCDX                     // or DBFNTX
LOCAL aDbfStruct := {}, i
AAdd( aDbfStruct, { "MEMOFIELD", "M", 10, 0 } )
FOR i := 2 TO 2046
   AAdd( aDbfStruct, { "FIELD" + StrZero( i, 4 ), "N", 13, 2 } )   // max. 2046; 2047 -> error
NEXT
dbCreate( "LIMITS.dbf", aDbfStruct )
USE ( "LIMITS.dbf" ) EXCLUSIVE NEW
APPEND BLANK
LIMITS->MemoField := Replicate( ".", 1024 * 1024 * 128 )   // 128 MB into one memo field
LIMITS->Field0002 := Len( LIMITS->MemoField )              // store its length
CLOSE LIMITS
/**/

This number of fields exceeded anything I had read about the limits, so I decided to test the maximum capacity of an individual MEMO field as well.

I started with 32 KB, but I soon passed the 1 MB barrier and stopped at 128 MB, because those amounts are beyond anything I had imagined.

Another surprise (at least for me): I was able to create a database that could hold "almost" any file inside a MEMO field.

Maybe the whole limit of a DBT/FPT file (whatever it is) can be taken up by a single MEMO field?

It's not important but I'm curious... Why???

Can any expert enlighten me???

Thank you very much and Hooray Harbour !!!

Yakano

Jun 5, 2022, 4:16:02 AM
to Harbour Users

KeepKalm ... (and sail away from) ... Harbour

AL67

Jun 6, 2022, 4:21:05 AM
to Harbour Users
Hi!

In the DBF header, the header length is stored as a 16-bit number,
and each field descriptor is 32 bytes, plus 33 bytes for the rest of the header (the 32-byte fixed part and the 0x0D terminator byte),

so    (65535 - 33) / 32 = 2046.9   ->   max fields: 2046
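For what it's worth, the same number can be read back from the file header itself. A minimal Harbour sketch, assuming a dBase III style layout where bytes 8-9 hold the 16-bit header length (LIMITS.dbf is the table from Yakano's test):

/**/
// Read the 16-bit header length from a DBF and derive the field count.
// Assumes: 32-byte fixed header + n * 32-byte field descriptors
// + 1 terminator byte (0x0D).
PROCEDURE Main()
   LOCAL nHandle := FOpen( "LIMITS.dbf" )
   LOCAL cHeader := Space( 32 )
   LOCAL nHeaderLen
   FRead( nHandle, @cHeader, 32 )
   FClose( nHandle )
   nHeaderLen := Bin2W( SubStr( cHeader, 9, 2 ) )   // bytes 8-9: header length
   ? "Fields:", Int( ( nHeaderLen - 33 ) / 32 )     // -> 2046 for LIMITS.dbf
   RETURN
/**/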

Adam

Francesco Perillo

Jun 6, 2022, 10:30:19 AM
to harbou...@googlegroups.com
Hi,
I'd really like to know why you need all those fields in one dbf. Unless they are all full of data and needed for a LOCATE/SEEK, in which case such a monster record may be acceptable, in all other cases there are different approaches that can be used.

In one case I created a HASH/array and [de]serialized it into a MEMO field. I didn't need to do searches on those fields.
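For illustration, a minimal sketch of that idea, assuming a table with a memo field EXTRAS (both names made up here) and Harbour's core hb_jsonEncode()/hb_jsonDecode() as the serializer; any other [de]serializer would do:

/**/
LOCAL hData := { "name" => "test", "amount" => 123.45, "paid" => .T. }
USE mytable EXCLUSIVE NEW
APPEND BLANK
mytable->Extras := hb_jsonEncode( hData )    // serialize on write
hb_jsonDecode( mytable->Extras, @hData )     // deserialize on read
? hData[ "amount" ]                          // -> 123.45
CLOSE mytable
/**/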

In other cases, where some fields were filled depending on the record "type", I normalized the data so that values are stored in "specialized" tables.

1000 fields are really.... too much ... :-)))




Yakano

Jun 7, 2022, 2:34:15 AM
to Harbour Users
Hi Adam
Thanks for the link and for your explanation
Regards

Yakano

Jun 7, 2022, 3:05:07 AM
to Harbour Users
Hi Francesco

In the past, when 16-bit and memory limits were a headache, I did something like you, but using an array read from / written to a memo field. I could keep using this method now, but it's easier to control and to read a field directly than a variable inside a memo field, when using CLD (the Clipper debugger).

These thousand fields are only a subset of the fields needed to process an ***official*** document for paying one country's state tax; the rest of the fields are in different databases. My financial advisor app now manages more than 12,000 different fields, and just a few of them are obsolete. What about 12k fields??? ;-)))

I am also interested in knowing the total limit of an FPT file. I am evaluating a project to store, in memo records of a separate DBF, all the files contained in a directory selected by the user (keeping the full path, the file content and a hash code). It may sound crazy, but I have my reasons (too long to explain, but related to VAT).
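A minimal sketch of that idea, with a hypothetical table FILESTORE.dbf and made-up field names, using Harbour's Directory(), hb_MemoRead() and hb_MD5() (MD5 only as an example hash):

/**/
#include "directry.ch"
REQUEST DBFCDX
PROCEDURE Main( cDir )
   LOCAL aFile
   hb_default( @cDir, "." + hb_ps() )
   // one record per file: full path, raw content, hash code
   dbCreate( "FILESTORE.dbf", { { "FULLPATH", "C", 240, 0 }, ;
                                { "CONTENT",  "M",  10, 0 }, ;
                                { "HASHCODE", "C",  32, 0 } }, "DBFCDX" )
   USE FILESTORE EXCLUSIVE NEW
   FOR EACH aFile IN Directory( cDir + "*.*" )
      APPEND BLANK
      FILESTORE->FullPath := cDir + aFile[ F_NAME ]
      FILESTORE->Content  := hb_MemoRead( cDir + aFile[ F_NAME ] )
      FILESTORE->HashCode := hb_MD5( FILESTORE->Content )
   NEXT
   CLOSE FILESTORE
   RETURN
/**/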

Regards

Mario H. Sabado

Jun 7, 2022, 4:57:15 AM
to 'elch' via Harbour Users
Hi Yakano,

Here's another reference for the Harbour limits:


Harbour File Size Limits

  • Max record size: 2^16-1 = 65535 bytes ( 64 KB )
  • Max number of records: 2^32-1 = 4,294,967,295 ( ~4 billion )
  • Max .dbf file size: 2^48 = 256 TB
  • Max DBT memo file size: 2 TB
  • Max FPT memo file size: 256 GB
  • Max SMT memo file size: 128 GB
  • Max NTX file size (standard): 4 GB
  • Max NTX file size (increased): 4 TB
  • Max CDX file size: 4 GB
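(The 64 KB record cap is easy to probe empirically, in the same spirit as Yakano's field test. A hedged sketch; whether the failure surfaces in dbCreate() or later may depend on the RDD:)

/**/
// 700 fields of C,100 give a 70001-byte record (incl. the delete flag),
// which cannot fit the 16-bit record length in the DBF header.
REQUEST DBFCDX
PROCEDURE Main()
   LOCAL aBig := {}, i
   FOR i := 1 TO 700
      AAdd( aBig, { "C" + StrZero( i, 3 ), "C", 100, 0 } )
   NEXT
   dbCreate( "TOOBIG.dbf", aBig, "DBFCDX" )   // expected to fail: record > 65535 bytes
   RETURN
/**/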

Regards,
Mario


Auge & Ohr

Jun 7, 2022, 6:13:50 AM
to Harbour Users
hi,

these Harbour limits are for a 64-bit OS and are about file size;
we are talking about FIELDs and how many of them you can use in a DBF

---

my solution is to create another DBF and use SET RELATION;
instead of an ALIAS I use FIELD->

as you can use 256 RELATIONs, you will get enough FIELDs :)
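A minimal sketch of this approach, with made-up table and field names (the columns are split over two DBFs sharing a key field ID, and part2 is indexed on ID):

/**/
REQUEST DBFCDX
PROCEDURE Main()
   USE part2 INDEX part2id NEW            // holds FIELD1001.. plus key ID
   USE part1 NEW                          // holds FIELD0001.. plus key ID
   SET RELATION TO part1->ID INTO part2   // child row follows the parent
   dbGoTop()
   ? part1->FIELD0001, part2->FIELD1001
   CLOSE DATABASES
   RETURN
/**/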

Jimmy

Mario H. Sabado

Jun 7, 2022, 6:19:27 AM
to 'elch' via Harbour Users
HI Jimmy,

I'm guessing that the maximum number of fields varies, as long as the maximum record size is not exceeded?

Regards,
Mario


Auge & Ohr

Jun 7, 2022, 6:35:52 AM
to Harbour Users
hi Mario,

can you create 65535 fields of type "L" in a DBF?
Look at the answer from AL67 and the DBF structure: the 16-bit header length already caps the number of field descriptors at 2046.

Jimmy

Mario H. Sabado

Jun 7, 2022, 7:48:48 AM
to 'elch' via Harbour Users
HI Jimmy,

I can only replicate the limit of 2046 fields as tested by Yakano. Going beyond that generates a runtime error on both 32-bit and 64-bit builds.

Regards,
Mario


Yakano

Jun 8, 2022, 5:22:20 AM
to Harbour Users
Hi Mario & Jimmy

Thanks for the info and the tests.
For the moment, 2046 is enough for me.
SET RELATION was a solution I used too.
Max FPT memo file size: 256 GB (is enough, too).

Regards, Yakano
