
Import CSV file


David Short

Aug 14, 2019, 12:52:20 PM
to SuperCard Discussion
I can't see that this topic has been addressed before, but I'm having a lot of trouble getting a script to work that will import a .csv file (from FileMaker) into a stack. Ideally it should go through each line of the file, put each item in the line into a new fld, and then create a new card for the next line. Does anyone have a successful script?

Bill Bowling

Aug 14, 2019, 12:55:58 PM
to superca...@googlegroups.com
Something like this?

-- prgdata, thecount, and culmcount are assumed to be set up elsewhere
repeat with x = 1 to the number of items of prgdata
  -- start a new card once the current one's field is filled
  if bg fld "input" is not empty then new card
  put item x of prgdata into bg fld "input"
  -- post a progress message every 24 records
  add 1 to thecount
  if thecount > 24 then
    put thecount + culmcount into culmcount
    put "Adding Pairings" && culmcount
    put 0 into thecount
  end if
end repeat


Scott

Aug 14, 2019, 2:08:27 PM
to SuperCard Discussion
CSV files can be a bit of a pain... especially if your field data contains commas and quotes. Is there a reason why you can't export the data from FileMaker as a TAB delimited text file?
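For what it's worth, with a tab-delimited file the import loop becomes almost trivial. Here's a rough, untested sketch (assuming the file has already been read into a variable named inData with the header line removed, and that the bg flds already exist):

  set the itemDel to tab
  repeat for each line aRecord of inData
    repeat with i = 1 to the number of items of aRecord
      put item i of aRecord into bg fld i -- nothing to unquote
    end repeat
    new cd -- one card per record (leaves a trailing blank card)
  end repeat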

codegreen

Aug 14, 2019, 2:56:10 PM
to superca...@googlegroups.com
Actually embedded commas alone are not so bad, but CSV containing both commas and quotes can definitely be a bit of a pain. A record like "Smith, John","He said ""hi""",42 can't simply be split on commas; you have to honor the quoting.

For example:

on mouseUp
  -- pick the input CSV and a save path for the new project
  answer file "Select a file to import:" with extension "csv"
  if it = "" then exit script
  local inPath = it, outFile = outPath(inPath)
  ask file "Save imported data as:" with outFile
  if it = "" then exit script
  put it into outPath

  -- slurp the whole file into it
  open file inPath
  read from file inPath until eof
  close file inPath
  if it = "" then exit script
  local inData = line 2 to 999999999 of it, fldNames = first line of it, labelRect = "10,10,160,24", fldRect = "150,9,600,29", bump = 24, numFields = the num of items of fldNames, i = 0

  -- create the target project and set up its background for editing
  new inv proj outPath
  set the style of this wd to scrolling
  set the editBG to true
  set the showFill to false
  lock screen
  lock messages
  set cursor to busy

  -- one label (draw text) plus one field per CSV column
  repeat for each item aName of fldNames
    put value(aName) into fldName -- value() strips any surrounding quotes
    set the showpen to false
    choose draw text tool
    drag from item 1 to 2 of labelRect to item 3 to 4 of labelRect
    choose browse tool
    set the textData of last bg part to fldName & ":"
    set the textColor of last bg part to 256
    set the showpen to true
    new fld
    set the rect of last bg part to fldRect
    add 1 to i
    if i = numFields then exit repeat
    -- step both rects down one row
    add bump to item 2 of labelRect
    add bump to item 4 of labelRect
    add bump to item 2 of fldRect
    add bump to item 4 of fldRect
  end repeat

  -- size the background and window to fit the fields
  local newSize = item 3 of fldRect + 10 & comma & item 4 of fldRect + 10, numRecs = the num of lines of inData, recNum = 0
  set the backsize of this bg to newSize
  add 15 to item 1 of newSize
  add 15 to item 2 of newSize
  set the windowSize of this wd to newSize
  center this wd

  -- with wordDel set to comma, each quoted CSV field parses as one "word"
  set the wordDel to comma
  repeat for each line aRecord of inData
    add 1 to recNum
    put 1 into fldNum
    repeat for each word aField of aRecord
      put value(aField) into bg fld fldNum
      add 1 to fldNum
    end repeat
    if recNum < numRecs then new cd
  end repeat

  set the itemDel to ":"
  set the name of this wd to last item of inPath

  open cd 1 of this wd
  put merge("[[numRecs]] records consisting of [[numFields]] fields each successfully imported")
end mouseUp


function outPath inPath
  -- derive a project name from the file name: "data.csv" -> "data.sc45"
  set the itemDel to ":"
  set the wordDel to "."
  return first word of last item of inPath & ".sc45"
end outPath


HTH,
-Mark

codegreen

Aug 14, 2019, 4:42:53 PM
to SuperCard Discussion
D-OH! I suppose it woulda been handy to name those new objects too... ;-)

    set the name of last bg part to fldName & "_label"
    set the textData of last bg part to fldName & ":"
    set the textColor of last bg part to 256
    set the showpen to true
    new fld
    set the rect of last bg part to fldRect
    set the name of last bg part to fldName
    add 1 to i
    if i = numFields then exit repeat
    add bump to item 2 of labelRect
    add bump to item 4 of labelRect
    add bump to item 2 of fldRect
    add bump to item 4 of fldRect
  end repeat

  local newSize = item 3 of fldRect + 10 & comma & item 4 of fldRect + 10, numRecs = the num of lines of inData, recNum = 0
  set the backsize of this bg to newSize
  add 15 to item 1 of newSize
  add 15 to item 2 of newSize
  set the windowSize of this wd to newSize
  center this wd

  set the wordDel to comma
  repeat for each line aRecord of inData
    add 1 to recNum
    put 1 into fldNum
    repeat for each word aField of aRecord
      set the text of bg fld fldNum to value(aField)
      add 1 to fldNum
    end repeat
    if recNum < numRecs then new cd
  end repeat

  set the itemDel to ":"
  set the name of this wd to last item of inPath
  open cd 1 of this wd
  put merge("[[numRecs]] records consisting of [[numFields]] fields each successfully imported")
end mouseUp


function outPath inPath
  set the itemDel to ":"
  set the wordDel to "."
  return first word of last item of inPath & ".sc45"
end outPath


-Mark

codegreen

Aug 14, 2019, 7:13:38 PM
to superca...@googlegroups.com
Apparently in recent OS versions (I originally wrote this example in Snow Leopard earlier today) there's a LOT more Mac Toolbox overhead involved in adding cards to scrolling windows than to plain ones, so here in Sierra postponing that step until the end boosts performance significantly (upwards of 500%). This update also keeps only a single copy of the input data (instead of two), which might matter if there's a lot of it.

Hopefully that justifies posting yet another copy...  =:-O

on mouseUp
  answer file "Select a file to import:" with extension "csv"
  if it = "" then exit script
  local inPath = it, outFile = outPath(inPath)
  ask file "Save imported data as:" with outFile
  if it = "" then exit script
  put it into outPath

  open file inPath
  read from file inPath until eof
  close file inPath
  if it = "" then exit script
  set cursor to watch
  lock messages

  local fldNames = first line of it, labelRect = "10,10,140,24", fieldRect = "150,9,600,29", bump = 24, numFields = the num of items of fldNames, i = 0
  delete first line of it -- keep just one copy of the data, in "it"
  new inv proj outPath
  lock screen
  set the editBG to true
  set the showFill to false

  repeat for each item aName of fldNames
    put value(aName) into fldName
    set the showpen to false
    choose draw text tool
    drag from item 1 to 2 of labelRect to item 3 to 4 of labelRect
    choose browse tool
    set the name of last bg part to fldName & "_label"
    set the textData of last bg part to fldName & ":"
    set the textColor of last bg part to 256
    set the showpen to true
    new fld
    set the rect of last bg part to fieldRect
    set the name of last bg part to fldName
    add 1 to i
    if i = numFields then exit repeat
    put offsetRect(labelRect, 0, bump) into labelRect
    put offsetRect(fieldRect, 0, bump) into fieldRect
  end repeat

  local newSize = item 3 of fieldRect + 10 & comma & item 4 of fieldRect + 10, numRecs = the num of lines of it, recNum = 0
  set the backsize of this bg to newSize
  set the windowSize of this wd to item 1 of newSize + 15, item 2 of newSize + 15
  center this wd

  set the wordDel to comma
  repeat for each line aRecord of it
    add 1 to recNum
    put 1 into fldNum
    repeat for each word aField of aRecord
      set the text of bg fld fldNum to value(aField)
      add 1 to fldNum
    end repeat
    if recNum < numRecs then new cd
  end repeat

  set the itemDel to ":"
  set the name of this wd to last item of inPath
  -- make the window scrolling only now: adding cards to a scrolling
  -- window is far more expensive than adding them to a plain one
  set the style of this wd to scrolling
  open cd 1 of this wd
  put merge("[[numRecs]] records consisting of [[numFields]] fields each successfully imported")
end mouseUp


function outPath inPath
  set the itemDel to ":"
  set the wordDel to "."
  return first word of last item of inPath & ".sc45"
end outPath

Live and learn! ;-)

-Mark

Bill Bowling

Aug 15, 2019, 12:13:01 PM
to 'Mark Lucas' via SuperCard Discussion
Mark,

Even though this is not my thread, thanks for the examples above.

I see you use the local keyword. Is this simply a technique or style that you use? Or do you have the explicit global variable property set to true?

Also, if this was a very large file to be imported, would the free size become an issue? And if so, how would you handle that?

Regards,

Bill



codegreen

Aug 15, 2019, 1:17:18 PM
to superca...@googlegroups.com
On Thursday, August 15, 2019 at 12:13:01 PM UTC-4, Bill Bowling wrote:
 
I see you use the local keyword. Is this simply a technique or style that you use?

Just a personal preference. Horizontal space is easier to come by than vertical when editing scripts nowadays, so glomming together multiple variable declarations and assignments on a single line lets me see more of the actual program logic at once than if each occupied a separate line.
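For example, the first declaration in the import script above glues two of these together:

  local inPath = it, outFile = outPath(inPath)

rather than spending a separate local line on each.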

Also, if this was a very large file to be imported, would the free size become an issue? And if so, how would you handle that?

Do you mean free space in the target project? If so, there shouldn't be any. FWIW I just ran this script on a CSV file with fifty thousand records in it, and the created project had no free space when the script completed. BTW if you're planning on importing whopping huge files you'll probably want to stick a progress message of some sort down at the bottom of the main record import loop:

  if recNum mod 100 = 0 then put recNum && "of" && numRecs


Or do you mean RAM to hold the CSV data in? If so, then you could just read it in smaller fixed-size chunks (trimming off the slop from any partial trailing record and either prepending it to the data from the following read or backing up the file read offset by its length).
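Something like this rough, untested sketch of the chunked approach (processRecords here is a hypothetical handler standing in for whatever per-record parsing you're doing):

  on importInChunks inPath
    local leftover = empty, chunkSize = 65536
    open file inPath
    repeat
      read from file inPath for chunkSize
      if it = empty then exit repeat -- hit eof
      put leftover & it into buffer
      if last char of buffer = return then
        put empty into leftover
      else
        -- partial trailing record: carry it over to the next read
        put last line of buffer into leftover
        delete last line of buffer
      end if
      processRecords buffer -- hypothetical: parse the complete lines
    end repeat
    close file inPath
    if leftover is not empty then processRecords leftover
  end importInChunks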

Does that make sense?

-Mark

Bill Bowling

Aug 15, 2019, 2:00:34 PM
to superca...@googlegroups.com
I have an older project (5 GB) that has about 100 cards. Each of those cards has 3 bg flds that hold 6.6 MB each. This project searches for an employee id on each card (bid month) and assembles a roster for each month. Needless to say, for every month (or card) I'm replacing my three 6.6 MB variables and putting the completed output into a bg fld. I do take the data from the fields and put it in variables, as this improves speed.

I have had problems in the past with this project getting slow or crashing. To resolve this I added a trap in the repeat loop that sends a mouseUp to the "parse data" button of every card to check the freesize and compact if the value is above a certain threshold. This resulted in spinning beach balls … so now I just compact on every repeat.

I know there have been some recent additions in 4.8.x for clobbering variables, but I have yet to incorporate them, as this project works fine as is and is not used that often.

So what I'm looking for is thoughts on managing free size and overhead.

Bill


codegreen

Aug 15, 2019, 3:53:43 PM
to SuperCard Discussion


OK, first of all, you should always try to avoid explicit sends, which are close to a couple of orders of magnitude slower than letting a message fall through the regular message path. That overhead is trivial here, but it's worth noting for the future.

Second, compacting operates on the entire project (not just one card), which is why it's so expensive, and why you probably shouldn't do it willy-nilly every time through your loop if you can avoid it. So if you're under the impression you have to issue a compact to each card (not sure if that's what you meant), you're doing extra work for basically no benefit. The same goes for measuring freesize (the reported value refers to the entire project).
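In other words, a single project-wide check after the loop should be plenty. A hypothetical sketch (the threshold is arbitrary, and the exact freeSize/compact syntax is from memory, so check it against your SC version):

  -- hypothetical: compact once, after the loop, and only if the
  -- project has accumulated a meaningful amount of slack
  if the freeSize of this proj > 10000000 then compact this proj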

Third, compaction is severely I/O bound (meaning the speed at which it happens is determined almost entirely by your disk throughput). If you're running on a spinner, then as a general rule you want to optimize it regularly to be sure your project isn't occupying a breadcrumb trail of linked sectors scattered over the disk, and that your free space is concentrated in big contiguous blocks; otherwise things can get gruesomely slow pretty quickly.

Ideally though you'd want to run this operation on faster media, say an SSD or (if you've got the memory) even a RAM disk. Internal SSDs are preferable to external ones because the interface is usually much faster. 

There are a number of child-proof utilities for creating RAM disks on OS X, but you can just do it from the Terminal if you're comfortable there. For example (assuming you have enough memory), to create an 8 GB RAM disk you'd use:

 diskutil erasevolume HFS+ RAMDisk `hdiutil attach -nomount ram://16777216`

(The ram:// number is a count of 512-byte blocks: 16777216 × 512 bytes = 8 GB.)
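When you're done with it, detach the device to get the memory back (the hdiutil attach call above prints the /dev/diskN identifier to substitute here):

 hdiutil detach /dev/diskN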

For a 6GB project, compacting on a RAM disk versus even an empty 10K RPM mechanical one should yield a performance increase of several orders of magnitude.

For example I just copied the roughly 200MB project I created by importing the 50,000 line CSV file I mentioned earlier to a RAM disk here, and ran a simple script that looped through the whole project and deleted every other card, then compacted it. The compact operation took only about a second.

4.8.x does include the ability to release variables, but it's intended more for use with globals (i.e., to reduce clutter in the global namespace). In your case, just putting empty into your local vars should probably work just as well to free up whatever memory they use during your loops.

Does that answer your question?

-Mark

Bill Bowling

Aug 15, 2019, 4:33:42 PM
to superca...@googlegroups.com
It does indeed.

I am running a mac mini and mac air … both with SSD drives.

Perhaps it's time to revisit my old work; I need to figure out where the freesize is coming from and why it slows down towards the end of the loop. Just like Chris said yesterday … still having fun.

Just a few more questions. Most of my work involves using one to many cards on a single background to break apart text data into individual records.

Usually I have a bg btn with a script to parse the data of each card into some type of new record (usually .csv text).

Once the scripting is done, I then have a bg btn that repeats through the cds of that bg and sends a mouseUp to the btn that parses the data.

So you're saying … avoid the "send mouseUp". In which case I can move my btn script to the bg or proj script as a handler or function, then use "go cd" and call the handler or function directly, as in the sketch below.
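Something like this (parseData being a stand-in for whatever my btn script does now):

  on parseAllCards
    lock screen
    repeat with c = 1 to the number of cds of this bg
      go cd c of this bg
      parseData -- ordinary handler call, no send overhead
    end repeat
  end parseAllCards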

If I stick to my old ways, how important is it to lock messages?

Does setting the cursor to busy many times during the repeat affect much?

I used to know this, but what is the keyboard command to edit the bg and proj scripts?

Thanks so much.

Bill






codegreen

Aug 15, 2019, 4:49:00 PM
to SuperCard Discussion
Is the script in each of these card btns different? Could you post some actual code for me to look at?

FWIW you should only need to set the cursor to busy once - after that it will keep counting all by itself until script execution completes. It's not a very expensive operation though, so skipping any unneeded calls isn't likely to speed things up much.

Control-C edits the card script
Control-B edits the bkgnd script
Control-W edits the window script
Control-P edits the project script
Control-S edits the SharedFile project script

-Mark

Uli Kusterer

Aug 18, 2019, 5:26:05 AM
to superca...@googlegroups.com
If you can, compact only at the very end. File formats like SuperCard's generally re-use free space in a file, so if you compact on every repeat, you're closing the gaps that SuperCard would have re-used, forcing SuperCard to move huge chunks of data around on every repeat, instead of only once at the end.



codegreen

Aug 18, 2019, 2:24:03 PM
to SuperCard Discussion

I thought that was obvious, but I suppose in reality that's mainly just because I've known how it works for so long! ;-)

So thanks for pointing that out...

While we're on the subject of stuff that bears repeating here, I should probably also mention that if you're not regularly editing/viewing it or using the find command to search it, then fields are just not a very efficient way to store raw bulk text data.

You'll get much more efficient storage/retrieval if you keep that stuff in either user properties (which have a lot less internal overhead) or external files (which sidestep the project compaction issue entirely). Depending on your use case, the convenience of fields may or may not be worth that inefficiency, but it's definitely another factor worth considering...
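For the external-file route, a rough untested sketch (dataFolder and recID are placeholders you'd supply; one text file per record, stored beside the project):

  function loadRecord dataFolder, recID
    local thePath = dataFolder & recID & ".txt"
    open file thePath
    read from file thePath until eof
    close file thePath
    return it
  end loadRecord

  on saveRecord dataFolder, recID, theData
    -- if records can shrink, remove the old file first so no stale tail survives
    local thePath = dataFolder & recID & ".txt"
    open file thePath
    write theData to file thePath
    close file thePath
  end saveRecord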

Even if you decide you really do need to keep the data in fields, the fact that you're currently storing it across multiple cards suggests that you might be able to break this up into multiple smaller projects (so there'd be much less data to rearrange when something in one of those fields changes). For example, if a typical update operation now requires revisiting only one card's data, then you're imposing a lot more busywork on SC when it compacts the project file than you'd face if that segment were in its own project.

Again when it comes to the compaction process itself, disk throughput is typically by far the biggest factor in determining how fast it happens. You say you have the project on an SSD now, but that still covers a WIDE range of speeds.

It might be useful (if you don't have it already) to DL something like the freeware Blackmagic Design Disk Speed Test util and see what kind of throughput you're actually getting. For reference, here I get only about 900MB/s read and 800MB/s write speeds from the stock SSD in my trash can Mac Pro, but on the M.2 SSD sitting on a PCI card in my (technically obsolete) cheese grater I get nearly three times that speed. Running from a RAM disk, though, I can pull over 4GB/s reading and over 7GB/s writing. So depending on how old your Mini is, you still might find you can improve compaction performance dramatically by copying the project over to a RAM disk when you're chewing on it (which, again, in the case of smaller separate projects means you'll likely need far less free RAM to do so). But ya gotta measure to know...

Bill Bowling

Aug 19, 2019, 2:21:46 PM
to 'Mark Lucas' via SuperCard Discussion
Fascinating…

For decades I have been storing text data in fields, then putting that text into local or global variables, and when done putting that data back into the fields. Am I correct that this is where my free space is coming from?

My go-to "find" is offset or lineOffset.

When I get a chance I will explore user properties. I'm not going to bother with storing the data externally, as the number of files would get messy in my eyes.

The RAM disk option is intriguing though. Next time it rains I will experiment with this - but summer calls!

Again, I have a working project, but I know it can do better.

Not to take this thread off the rails, but looking into the future, is there a better 64-bit solution for parsing text?

Bill

