
ImportFile Error -15 File size exceeds limit


kilstromcraig

Apr 15, 2009, 9:44:50 AM
I'm importing a tab-delimited text file, and ImportFile returns error -15, "File size exceeds limit."

What's the limit?

Thank you.

John Strano[Sybase]

Apr 15, 2009, 1:22:20 PM
From the online documentation...

"...PowerBuilder 10.0 and later versions are Unicode enabled. If your
application uses the ImportFile method to import very large text files
(approximately 839,000 lines) into a DataWindow or DataStore, ImportFile
returns the error code -15. Larger text files could be imported in ANSI
versions of PowerBuilder..."

http://infocenter.sybase.com/help/topic/com.sybase.dc33822_1150/html/pbentrb/BABEBACG.htm?resultof=%22%69%6d%70%6f%72%74%66%69%6c%65%22%20%22%69%6d%70%6f%72%74%66%69%6c%22%20
--
John Strano - Sybase Technology Evangelist

blog: http://powerbuilder.johnstrano.com/


"kilstromcraig" <kilstr...@yahoo.com> wrote in message
news:eceed565-ccbf-4280...@q9g2000yqc.googlegroups.com...

kilstromcraig

Apr 15, 2009, 1:56:45 PM
Thank you for pointing me to the doc. I had not found it. However,
I'm not able to import 100,000 lines.

Is the issue the number of rows/lines or the total file size? The
file size appears to be the problem; I've isolated the failure point
at approximately 56 MB.

Any thoughts?

Thanks.

johnh

Apr 15, 2009, 3:19:48 PM
Craig, we ran into the same issue some time back. We found
that the issue was total file size rather than the number of
lines (our testing showed that the problem appeared around
500 MB).

We really didn't work around the problem -- we just had our
app check the file size prior to attempting the
ImportFile(), and if it exceeded what we had identified as
the max file size, then we advised the user that the file
could not be imported.
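
In rough PowerScript terms, the guard was something like the following
minimal sketch (not our production code; dw_import and the byte limit
are placeholders you would tune to your own testing):

    // Guard ImportFile() with a file-size check before attempting the import.
    // ll_max_bytes is an assumed limit; adjust it to whatever your testing shows.
    long ll_max_bytes = 52428800   // e.g. 50 MB
    long ll_size, ll_rows

    ll_size = FileLength("c:\data\import.txt")   // size in bytes; -1 if the file doesn't exist

    IF ll_size < 0 THEN
        MessageBox("Import", "File not found.")
    ELSEIF ll_size > ll_max_bytes THEN
        MessageBox("Import", "This file is too large to import (" + String(ll_size) + " bytes).")
    ELSE
        ll_rows = dw_import.ImportFile("c:\data\import.txt")
        IF ll_rows < 0 THEN MessageBox("Import", "ImportFile failed with code " + String(ll_rows))
    END IF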

Totally off the top of my head (i.e., no testing or
otherwise deep thinking about the feasibility), I'm
wondering if maybe you could use the file functions to split
an excessively large file into multiple files, and then
import each in succession. You probably would take a hit in
performance, but at least it would work.

Anyway, hope this helps...

kilstromcraig

Apr 15, 2009, 4:24:33 PM
Thank you very much for the thoughtful reply. We agree on the
situation, but there's a large difference in the limits we found.
Perhaps there is no single limit, but a combination of factors
depending on the operating system and the machine. I did watch the
paging file and memory usage to make sure I wasn't running out of
system memory. However, the issue is not consistent on different
machines.

I think this is a simple question for which there may be no easy
answer.

In the meantime, I have to make a code change that will allow our
customers to save as large a file as possible without the application
crashing.

Is there someone else who can offer further insight?

Jerry Siegel [TeamSybase]

Apr 15, 2009, 4:15:44 PM
How about an ODBC text driver as a data source?

--
Report Bugs: http://case-express.sybase.com/cx/welcome.do
Product Enhancement Requests:
http://my.isug.com/cgi-bin/1/c/submit_enhancement

<John H> wrote in message news:49e63354.532...@sybase.com...

kilstromcraig

Apr 15, 2009, 4:31:54 PM
With the datastore (created from n_ds) being populated with the
imported file? Are you saying that the problem may lie with the
datasource of the dataobject -- not the file being imported? Thanks.

Jerry Siegel [TeamSybase]

Apr 15, 2009, 6:34:26 PM
No, I'm saying that the ODBC driver might be able to deal with a large file
by treating it as a table, where the ImportFile function does not. You would
create a transaction object pointing to the ODBC data source, use SQL in the
DataStore pointing to the table, then SetTransObject() and Retrieve().
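
A rough sketch of that setup (the driver name, the DBQ directory, and the
DataObject here are assumptions, not something I've tested against your file):

    // Treat the text file as a table via the ODBC text driver.
    // "Microsoft Text Driver", c:\data and d_text_import are placeholders.
    transaction ltr_text
    datastore lds_data
    long ll_rows

    ltr_text = CREATE transaction
    ltr_text.DBMS = "ODBC"
    ltr_text.DBParm = "ConnectString='Driver={Microsoft Text Driver (*.txt; *.csv)};DBQ=c:\data'"
    CONNECT USING ltr_text;

    IF ltr_text.SQLCode = 0 THEN
        lds_data = CREATE datastore
        lds_data.DataObject = "d_text_import"   // its SQL would be: SELECT * FROM import.txt
        lds_data.SetTransObject(ltr_text)
        ll_rows = lds_data.Retrieve()
        DESTROY lds_data
        DISCONNECT USING ltr_text;
    END IF
    DESTROY ltr_text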

"kilstromcraig" <kilstr...@yahoo.com> wrote in message
news:4a8e60cd-4aef-4d85...@g20g2000vba.googlegroups.com...

Terry Voth [TeamSybase]

Apr 15, 2009, 6:53:52 PM
Depending on whom you want to believe....

One thing that tripped me up once was that the user had embedded line
breaks in the data. The DataWindow thought that was the start of a new
record, so the DW racked up a whole lot more rows than my user thought
he was sending. You might want to write a quick program to ensure
you've got the right number of CR/LFs in your file. (I've got a vague
memory that you have to count stand-alone CRs and LFs, but I'm not
100% sure, and that idea may be PB-version dependent.)
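
A quick-and-dirty way to do that sanity check (just a sketch; it counts lines
the way a LineMode! read sees them, which is roughly what the import will get,
and the file name is a placeholder):

    // Count the lines PowerBuilder will actually see, to compare against the
    // row count the user expects.
    long ll_file
    long ll_lines = 0
    string ls_line

    ll_file = FileOpen("c:\data\import.txt", LineMode!, Read!)
    IF ll_file > 0 THEN
        DO WHILE FileReadEx(ll_file, ls_line) >= 0   // FileReadEx returns -100 at end of file
            ll_lines ++
        LOOP
        FileClose(ll_file)
    END IF
    MessageBox("Line count", String(ll_lines) + " lines found")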

Good luck,

Terry and Sequel the techno-kitten

*********************************
Build your vocabulary while feeding the hungry
http://www.freerice.com
*********************************
Newsgroup User Manual
=====================
TeamSybase <> Sybase employee
Forums = Peer-to-peer
Forums <> Communication with Sybase
IsNull (AnswerTo (Posting)) can return TRUE
Forums.Moderated = TRUE, so behave or be deleted
*********************************

Sequel's Sandbox: http://www.techno-kitten.com
Home of PBL Peeper, a free PowerBuilder Developer's Toolkit.
Version 4.0.4 now available at the Sandbox
PB Futures updated June 25/2008
See the PB Troubleshooting & Migration Guides at the Sandbox
^ ^
o o
=*=

Jeremy Lakeman

Apr 15, 2009, 8:49:38 PM

Read the file in smaller chunks yourself and call ImportString()?

Roland Smith [TeamSybase]

Apr 16, 2009, 9:05:46 AM
Try reading the file in LineMode! and calling ImportString() one row at a time.
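
Something along these lines (a minimal sketch, assuming a DataStore named
lds_import and a tab-delimited file; error handling kept to a minimum):

    // Read the file in LineMode! and ImportString() one row at a time,
    // so the whole file never has to sit in memory at once.
    long ll_file, ll_rc
    string ls_line

    ll_file = FileOpen("c:\data\import.txt", LineMode!, Read!)
    IF ll_file > 0 THEN
        DO WHILE FileReadEx(ll_file, ls_line) >= 0
            ll_rc = lds_import.ImportString(Text!, ls_line)
            IF ll_rc < 0 THEN EXIT   // stop on the first import error
        LOOP
        FileClose(ll_file)
    END IF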


"kilstromcraig" <kilstr...@yahoo.com> wrote in message

news:4668b42a-0205-405a...@r34g2000vba.googlegroups.com...

kilstromcraig

Apr 16, 2009, 10:41:02 AM
Thank you all for your insights.

I'll fix the system fault now by establishing a size limit, then go
back and incorporate your feedback into a more complete resolution.

I appreciate the help.

Craig

kilstromcraig

May 28, 2009, 1:13:18 PM
I took the advice offered earlier: I saved to a text file and then
imported it a row at a time in LineMode! using the new FileReadEx
function. I couldn't believe it, but reading (and importing) one row
at a time is very quick. Trying to use large blocks or string
concatenation for these large datasets chews up memory and slows
things way down.

I also implemented a new output file type (XML), custom-building it to
meet some internal requirements instead of using SaveAs, and wrote it
out with the new FileWriteEx function to a Unicode file (opened in
TextMode!). It works well.
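
For anyone who lands here later, the write side looks roughly like this
(a sketch only; the file name, the DataStore, and the XML fragments are
placeholders rather than my actual code, and this sketch opens the file
in LineMode! so each write gets its own line):

    // Write rows out to a Unicode (UTF-16LE) file with FileWriteEx.
    long ll_file, ll_row
    string ls_xml

    ll_file = FileOpen("c:\data\export.xml", LineMode!, Write!, LockWrite!, Replace!, EncodingUTF16LE!)
    IF ll_file > 0 THEN
        FileWriteEx(ll_file, '<?xml version="1.0" encoding="UTF-16"?>')
        FOR ll_row = 1 TO lds_export.RowCount()
            // Build one row of XML; the element and column names are placeholders
            ls_xml = "<row>" + lds_export.GetItemString(ll_row, "some_column") + "</row>"
            FileWriteEx(ll_file, ls_xml)
        NEXT
        FileClose(ll_file)
    END IF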

Thanks to all of you for your help.
