Import .csv file to dbf


Diego Fazio

Jul 10, 2019, 5:10:18 PM
to Harbour Users
Hi all, I need to import a .csv file (about 250 MB) into a DBF.
I tried using FRead() but it took very long. Is there a better, faster way to do it?

Thanks
Diego

GeoffD

Jul 10, 2019, 8:35:54 PM
to Harbour Users
Hi Diego
May I suggest you look at the APPEND FROM command using DELIMITED [WITH ...].

If you already have a DBF structure then 

USE my.dbf

APPEND FROM records.csv DELIMITED  

One problem may be the date format. It will need to be yyyymmdd.

Hope this helps
Regards
Geoff

Paola Bruccoleri

Jul 10, 2019, 9:11:35 PM
to harbou...@googlegroups.com
Hi Diego

Use FParseEx(). I hope it helps.

An example. Here it is a .txt file, but a .csv works the same:

aText := FParseEx( "archivos\infractores.txt", "," )

FOR EACH aRecord IN aText
   IF Lower( aRecord[ 1 ] ) <> "numero"   // not the header row
      cPais    := aRecord[ 1 ]
      cIdentif := aRecord[ 2 ]
      cNumero  := AllTrim( PadR( aRecord[ 3 ], 12 ) )
      cNombre  := aRecord[ 4 ]
      dInicio  := CToD( aRecord[ 5 ] )
      dFin     := CToD( aRecord[ 6 ] )

      IF ! infract->( DbSeek( cNumero ) )
         infract->( DbAppend() )
         infract->numero  := cNumero
         infract->identif := cIdentif
         infract->nombre  := cNombre
         infract->inicio  := dInicio
         infract->fin     := dFin
      ELSE
         IF infract->( RecLock( 0 ) )   // RecLock() is a local helper function
            infract->inicio := dInicio
            infract->fin    := dFin
            infract->( DbUnlock() )
         ENDIF
      ENDIF
   ENDIF
NEXT



----- Original message -----
From: "Diego Fazio" <diego...@gmail.com>
To: "Harbour Users" <harbou...@googlegroups.com>
Sent: Wednesday, July 10, 2019 18:10:18
Subject: [harbour-users] Import .csv file to dbf
--
--
You received this message because you are subscribed to the Google
Groups "Harbour Users" group.
Unsubscribe: harbour-user...@googlegroups.com
Web: http://groups.google.com/group/harbour-users

---
You received this message because you are subscribed to the Google Groups "Harbour Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to harbour-user...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/harbour-users/75085d09-38ea-4a6a-b9af-e68e187f2323%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

jgt

Jul 10, 2019, 10:56:17 PM
to Harbour Users
If the date format is "mm/dd/yyyy" you can import it into a 10-character field, add a date field to the end of each record in the DBF, and after importing the csv file:
   SET DATE AMERICAN   // or whatever matches the input format
   REPLACE ALL date_field WITH CToD( csv_date_field )
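
jgt's two-step approach might be sketched like this (file and field names are hypothetical; the assumption is that MY.DBF ends with a 10-character field CSV_DATE and a date field REAL_DATE):

```harbour
// Two-step import: bring the date in as plain text,
// then convert it into the real date field afterwards.
USE my NEW
APPEND FROM records.csv DELIMITED

SET DATE AMERICAN                           // matches mm/dd/yyyy input
REPLACE ALL real_date WITH CToD( csv_date )
```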

Diego Fazio

Jul 11, 2019, 9:41:25 AM
to Harbour Users
Paola, FParseEx() is very good. I didn't know it. It loads into memory very quickly. The only thing I would still need (if it exists) is a way to traverse it as fast as possible. The thing is, I need to search for specific data inside this array...

Thanks
Diego.

Paola Bruccoleri

Jul 11, 2019, 9:55:30 AM
to harbou...@googlegroups.com
Hi Diego
If the file contains this:

paola, bruccoleri, 12344555, 18 de julio 2323, 3333333

each part comes back to you in aRecord; for example, aRecord[1] returns "paola", and so on... I suppose you have already noticed that.
Now... to search inside each field, use the string functions you have available... I don't know how to traverse it any faster; I believe FParseEx() is the most efficient there is.

Well, let us know later how it went
byeee
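
For the searching Diego mentions, one option (a hypothetical sketch, assuming the array-of-arrays shape FParseEx() returns in the example above) is AScan() with a code block:

```harbour
// Find the first record whose third field matches a given value.
// aText is the array of records returned by FParseEx();
// cWanted is the value being looked for.
LOCAL nPos := AScan( aText, {| aRecord | AllTrim( aRecord[ 3 ] ) == cWanted } )

IF nPos > 0
   ? "Found on line", nPos
ENDIF
```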



Diego

Jul 16, 2019, 4:39:48 AM
to harbou...@googlegroups.com
Thanks Geoff. Is there a way to add a progress indicator to this command?

Diego

Angel Pais

Jul 16, 2019, 4:39:49 AM
to harbou...@googlegroups.com

GeoffD

Jul 16, 2019, 11:56:08 PM
to Harbour Users
Hi Diego
No, I don't know how to show progress, but I'll see if I can think of some suggestions!

Cheers
Geoff



GeoffD

Jul 17, 2019, 2:35:58 AM
to Harbour Users
Hi Diego
It's me again.
Well, I'm getting closer to a solution, but the problem remains finding a quick way to count the number of lines (records) in the text file.
Using the WHILE clause of the APPEND FROM command you can do whatever you like, provided you return .T. to continue the process, so I have been able to create a progress bar by pre-loading it with the maximum number of lines, and it works fine.
The WHILE clause is called for each appended record, so you can calculate when to display progress based on whatever formula you want.

BUT I'm stuck on a quick means of opening a text file and counting the number of lines in it!
Don't know if this helps at all?

Geoff
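
A minimal sketch of what Geoff describes, assuming nTotal has been estimated beforehand (the helper and file names are made up):

```harbour
// Progress display through the WHILE clause of APPEND FROM.
// The clause is evaluated for each incoming record and must
// return .T. for the append to continue.
PROCEDURE ImportWithProgress( nTotal )
   LOCAL nDone := 0
   USE my
   APPEND FROM records.csv DELIMITED WHILE ShowStep( @nDone, nTotal )
   RETURN

STATIC FUNCTION ShowStep( nDone, nTotal )
   IF ++nDone % 1000 == 0              // refresh every 1000 records
      ? Str( nDone * 100 / nTotal, 6, 2 ) + " %"
   ENDIF
   RETURN .T.
```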

Klas Engwall

Jul 17, 2019, 3:56:31 AM
to harbou...@googlegroups.com
Hi GeoffD,
nLines := int( nFileSize / ( nPositionOfFirstLinefeed + len( hb_eol() ) ) )

:-)

Regards,
Klas
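
Klas's one-liner could be computed like this (a sketch that reads the whole file with hb_MemoRead(); like the formula itself it is only an approximation that assumes roughly equal line lengths):

```harbour
LOCAL cBuf   := hb_MemoRead( "records.csv" )
LOCAL nPos   := At( Chr( 10 ), cBuf )          // position of the first linefeed
LOCAL nLines := Int( Len( cBuf ) / ( nPos + Len( hb_eol() ) ) )
```

For a 250 MB file, reading just the first chunk with FOpen()/FRead() would avoid loading it all into memory.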

Serge Girard

Jul 17, 2019, 7:44:41 AM
to Harbour Users
Assuming all records are equal in size!

Serge


Diego Fazio

Jul 17, 2019, 8:00:48 AM
to Harbour Users
In my case, the length of the line is fixed. So I got the number of records.

Thanks
Diego.

Klas Engwall

Jul 17, 2019, 2:54:01 PM
to harbou...@googlegroups.com
Hi Serge,

> Assuming all records are equal in size !

Yes, of course. But for creating a progress bar it would probably be
good enough. And it would probably work better than Microsoft's
(in)famous progress bars :-)

If the record size varies a lot, somewhat better precision could
possibly be obtained by reading, say, the first ten records and
calculating an average. However, if the *exact* number of records is
required, then this wouldn't work at all. It all depends on the
requirements.

Regards,
Klas
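
The averaging refinement Klas describes could be sketched like this (names are illustrative; only the first chunk of the file is read):

```harbour
#include "fileio.ch"

FUNCTION EstimateLines( cFile )
   LOCAL cChunk  := Space( 8192 )
   LOCAL nHandle := FOpen( cFile )
   LOCAL nRead   := FRead( nHandle, @cChunk, Len( cChunk ) )
   LOCAL aLines  := hb_ATokens( Left( cChunk, nRead ), hb_eol() )
   LOCAL nTake   := Min( 10, Len( aLines ) - 1 )  // skip the last, possibly partial, token
   LOCAL nSize   := FSeek( nHandle, 0, FS_END )   // total file size
   LOCAL nSum    := 0
   LOCAL i

   FOR i := 1 TO nTake
      nSum += Len( aLines[ i ] ) + Len( hb_eol() )
   NEXT
   FClose( nHandle )
   RETURN Int( nSize / ( nSum / nTake ) )         // estimated record count
```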

GeoffD

Jul 17, 2019, 9:53:57 PM
to Harbour Users
Thanks Klas
That works very well and, as you say, an average could be calculated if the record lengths vary.

Thanks again
Geoff
