Exporting Graphic Blobs


Alan D Moore

Aug 8, 2025, 11:14:14 AM
to TheDBCommunity
Hello and apologies in advance if this is a bit of a novel.

I have the task of migrating my organization off Paradox, which I've been able to do successfully for some years mostly using pypxlib.

I'm stuck on a table that has a Graphic Blob field.  It has 14k records, about 11k of which have a small drawing stored in the Blob field.  I need to export these images and tie them back to the records they were stored with in my new database (PostgreSQL).

I quickly learned that pypxlib can't handle Graphic Blobs in any useful way.  Both it and pxlib seem to be dead development-wise, so that's not going to work.

My next solution was to export to HTML.  This appeared to work: the blobs were written out to JPG files with arbitrary names, so I wrote a scraper to grab the filename and identifying data from each record to tie the filename back to a record in the new database.
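A scraper along those lines can be sketched with the standard library's html.parser.  This is a minimal sketch, not the original script: it assumes each exported table row puts the identifying value in the first <td> and the blob as an <img src="..."> in another cell, which may not match the real Paradox HTML export layout.

```python
from html.parser import HTMLParser

class RowScraper(HTMLParser):
    """Collect (identifier, image filename) pairs from an exported HTML table.

    Assumes the record's identifying value is the first text cell in each
    <tr> and the blob appears as an <img src="...">; the actual Paradox
    export layout may differ.
    """
    def __init__(self):
        super().__init__()
        self.pairs = []       # (identifier, image filename) per row
        self._ident = None
        self._src = None
        self._in_td = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":                      # new record: reset row state
            self._ident = None
            self._src = None
        elif tag == "td":
            self._in_td = True
        elif tag == "img":                   # remember the exported blob file
            self._src = dict(attrs).get("src")

    def handle_data(self, data):
        # first non-empty cell text in the row is the identifier
        if self._in_td and self._ident is None and data.strip():
            self._ident = data.strip()

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._ident and self._src:
            self.pairs.append((self._ident, self._src))

sample = (
    "<table>"
    "<tr><td>GIS-001</td><td><img src='IMG0001.jpg'></td></tr>"
    "<tr><td>GIS-002</td><td><img src='IMG0002.jpg'></td></tr>"
    "</table>"
)
scraper = RowScraper()
scraper.feed(sample)
print(scraper.pairs)   # [('GIS-001', 'IMG0001.jpg'), ('GIS-002', 'IMG0002.jpg')]
```

The resulting pairs can then drive UPDATE statements against the new database.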

One problem, though:  Paradox exports the table incorrectly.  Seemingly at random, the wrong image shows up with the wrong record.  I tried the process using Paradox 11 on Windows 10, Paradox 11 on Windows 10 in Windows 7 compatibility mode, Paradox 11 on Windows XP, and Paradox 9 on Windows XP.  All were wrong to varying degrees.  XP was by far the worst: it didn't even export a third of the files.

So I have given up on the HTML export route.

My last hope is ObjectPAL.  I do not grok ObjectPAL, but I gleaned enough of it from some web searches to cobble together a script that scans the table and writes out each image to a file with a name containing the fields I'd need to tie it into the new database.  AND IT WORKED.... for a little bit.  At precisely 882 records into the scan, it started putting out empty image files (even though I only write a file if there is image data in the record).

I updated my script to allow me to start at an arbitrary record and go for a specified number of records, so I could export in batches.  THIS WORKED.... for a little longer.  I got as far as 2,000 records, but after that it will only spit out blank files.
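The batch windows can be computed up front, so each run picks up where the last one stopped.  A small hypothetical helper (not from the original ObjectPAL) that mirrors the script's skipto/run_for parameters:

```python
def batch_windows(total_records, batch_size):
    """Yield (skipto, run_for) pairs covering records 1..total_records.

    Each pair mirrors the export script's parameters: skip every record
    at or below `skipto`, then process `run_for` records.
    """
    skipto = 0
    while skipto < total_records:
        run_for = min(batch_size, total_records - skipto)
        yield skipto, run_for
        skipto += run_for

for window in batch_windows(1400, 500):
    print(window)   # (0, 500) then (500, 500) then (1000, 400)
```

Plugging each pair into the ObjectPAL script (one batch per run) covers the whole table without overlap.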

Anyone have advice?  Something I can do to help this run reliably?  Any issues with my ObjectPAL code? (see below)

method run(var eventInfo Event)
var
   tc       TCursor
   diagram  Graphic
   gis_no   String
   skipto   Number
   run_for  Number
endVar

   skipto = 2000
   run_for = 500

   if tc.open("\\Path\\to\\my.DB") then
      scan tc:
         ; skip records below the starting point
         if tc.recNo() < skipto then loop endIf
         ; stop once the batch size has been reached
         if (tc.recNo() - skipto) > run_for then
            quitLoop
         endIf
         tc.fieldValue("Gis_no", gis_no)
         tc.fieldValue("Diagram", diagram)
         ; only write a file if the record actually has image data
         if NOT diagram.isBlank() then
            diagram.writeToFile("\\Path\\to\\output\\" + String(tc.recNo()) + "-" + gis_no + ".jpg")
         endIf
      endScan
      tc.close()
   endIf
endMethod

Steven Green

Aug 8, 2025, 11:30:56 AM
to TheDBCommunity
Alan, you did great.. from your description of the errors, my first guess is that the blob file is corrupt, not that your scan is wrong.. rebuilding the table, after you make a backup, might correct some "bad pointer" issues, but with blob and binary data, there is no guarantee

Alan D Moore

Aug 8, 2025, 11:47:16 AM
to TheDBCommunity
Thanks, and thanks for the response!

You may be right about the corruption; I have no problems viewing the data in Paradox, and it passes a verify, but when I've tried to rebuild the table before, I just end up with an empty table.  Granted, I don't really know what I'm doing with a rebuild; our resident Paradox guru retired a couple of years ago.

Interestingly, I removed the master password from my copy of the table and I seem to be getting further down the road with the script.  Not sure how far I'll get this time.  It almost feels like Pdox is hitting some kind of internal resource limit (the system has more than enough to spare).  Maybe removing the password freed up some overhead?  Or maybe decrypting the table repaired some corruption?  I'm just stabbing in the dark.

Steven Green

Aug 8, 2025, 11:52:40 AM
to TheDBCommunity
blob and graphic fields are subject to corruption that paradox can't fix, and the more they get edited, the more fragile they become.. yes, the repair tool can make it worse, because it has no idea what that content is.. password protected tables just add to the complexity.. you're doing what you can, you are extracting the data that can be extracted

Alan D Moore

Aug 8, 2025, 1:14:24 PM
to TheDBCommunity
Made a bit more progress.  I kept trying different things, and it would go for a few hundred more records and then die.  I noticed that after it died, Paradox would start to throw DLL errors if I tried to do certain things.

Which led me to believe that it was just corrupting its own memory space.  So in theory, if I just close and reopen Paradox between each batch, that *should* allow it to keep running.  So far that theory has gotten me closing in on 5,000 exported records, so maybe this is the final hurdle?

It's funny: I've been working on replacing Paradox for my employer for at least a decade, and this group of tables is the last of the holdouts.  It's like I've reached the game boss -- Paradox is not going down without a fight!

Michael Kennedy

Aug 10, 2025, 7:10:34 PM
to TheDBCommunity
Alan,

Very, very interesting issue!

Just a small Q: When you access any of the 14k records, individually, is the relevant Blob always correct? If so, perhaps your files are OK?

Next, in your 'skipto' code, I'm guessing that the SCAN function actually scans through an index AND the matching data records/buffers.  What if you used a LOCATE (or similar) function to skip to the record you want to start on?  I expect that approach might use only the index initially, running down through it until the relevant start point is reached, and only start touching the data records/buffers thereafter.

Might be interesting...

  - Mike

Alan D Moore

Aug 11, 2025, 1:40:10 PM
to TheDBCommunity
Thanks for the ideas, Michael, but I think I have solved this one.  Yay!

I spent all day Friday exporting JPEGs; it seems that Paradox basically corrupts its own memory space after so many exports.  I could close and restart Paradox and grab the next set of records just fine.  By close of the day I had exported the whole database.

This morning I rewrote the scripts to import the files into the new database and started checking my data.  The records were matched correctly (yay!), but 2 of the 50 I checked were exported at a tiny resolution and were basically corrupted.  I tried to re-export just those files, and the result was the same every time.  SO... I'm sunk again.

I then had the thought to try exporting to BMP.  I'd only exported to JPEG because that's what the HTML exporter uses, so I figured it was the default format.  BMP was the hero, though.  It exported like lightning, and all the sizes and images were correct.  I re-exported all 14k records in about 10 minutes without having to restart Paradox.
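Since the ObjectPAL script names each file "<recNo>-<Gis_no>" plus the extension, the import side can recover both keys from the filename alone.  A sketch of that parsing step (the helper name is hypothetical; splitting on only the first hyphen guards against a Gis_no that itself contains hyphens):

```python
import os

def parse_export_name(filename):
    """Split an exported blob filename back into (recno, gis_no).

    The export script writes names as "<recNo>-<Gis_no>.bmp"; we split
    on the first hyphen only, in case Gis_no contains hyphens itself.
    """
    stem, _ext = os.path.splitext(os.path.basename(filename))
    recno, _, gis_no = stem.partition("-")
    return int(recno), gis_no

print(parse_export_name("/exports/2001-GIS-0042.bmp"))   # (2001, 'GIS-0042')
```

From there, each file's bytes can be loaded into a PostgreSQL bytea column (e.g. via psycopg2), keyed on the recovered Gis_no.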

I've since checked the first 100 records with no problems found.  Going to bump this back to the stakeholders to do more thorough checking, but I think I may finally have cracked the process here.

Lesson learned:  Paradox's JPEG converter is best not used.

Alan D Moore

Aug 11, 2025, 1:40:58 PM
to TheDBCommunity
Also, Michael, to answer your question:  Yes, accessing them manually in Paradox always seems to return them correctly.


Michael Kennedy

Aug 12, 2025, 5:14:56 PM
to TheDBCommunity
That's brilliant, Alan.

It's interesting that all the correct data and blobs seem to have been loaded into the data files, and that all the indices are likely OK too.  Strange that the JPEG handling causes some RAM issues!

Maybe worth checking some data in the middle and end of the exported file, as well as at the beginning, in case some rot sets in somewhere.

Well done again!!
  - Michael