Whether it's a scheduled task or not is irrelevant (unless you plan to use onComplete in CF10's scheduler - which is just awesome).
Whatever process you're using now to import the data, when your .cfm file is triggered, import only the first batch - say, 100 records for this example (*see note below). Whatever the last item was, store that record's unique identifier in a variable (or whatever value you want to use to identify the last record processed). Then, at the bottom of the file, call the same import file again using cflocation and pass it that unique ID. Using that ID, tell your script to start from the next record. Each time the file finishes (and before it calls itself again), the request's memory is released - which is the whole point of batching.
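Here's a minimal sketch of that pattern. The file name, datasource, table, columns, and the importRecord() function are all placeholders for whatever your import logic actually is:

```cfm
<!--- import_batch.cfm - processes one batch per request, then hands off to itself --->
<cfparam name="url.lastID" default="0">
<cfset batchSize = 100>

<!--- Pull only the next batch, starting after the last record already handled --->
<cfquery name="qBatch" datasource="myDSN" maxrows="#batchSize#">
    SELECT id, title, body
    FROM   staging_import
    WHERE  id > <cfqueryparam value="#url.lastID#" cfsqltype="cf_sql_integer">
    ORDER BY id
</cfquery>

<cfloop query="qBatch">
    <!--- your existing per-record import logic goes here (importRecord is hypothetical) --->
    <cfset importRecord(qBatch.id, qBatch.title, qBatch.body)>
    <!--- remember the last record we touched --->
    <cfset lastID = qBatch.id>
</cfloop>

<!--- A full batch means there's probably more work: call this same file again
      with the last ID, so the next request starts where this one left off --->
<cfif qBatch.recordCount EQ batchSize>
    <cflocation url="import_batch.cfm?lastID=#lastID#" addtoken="false">
</cfif>
```

Because each batch runs in its own request, whatever memory that request used is freed before the next one starts.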
*note: The batch size depends on how much RAM is available to the JVM. Some of my clients can run batches of 50k and higher, but it really doesn't matter if you use low numbers. Even if you set it to a low number, it will still import very quickly using this process. The only difference is that you'll have to run a query each time at the top of the file (I suggest writing that query yourself and limiting it to return only 100 records - or whatever your batch size is).
There are a bunch of other things I'd do, of course: set a tracking variable that lets me cancel a long-running import with a click from within the webtop, put a <cfsetting> tag at the top of the file with a decent requestTimeout, wrap the cflocation in a condition so the script knows when the import is complete and stops (you don't want a never-ending loop), etc. But you get the idea.
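For those safeguards, something like this at the top and bottom of the same file works. The import_status table and jobName are just one way to store the cancel flag (I keep mine somewhere the webtop can flip it); adapt to your own setup:

```cfm
<!--- Give each batch request plenty of time, but not forever --->
<cfsetting requesttimeout="600">

<!--- Check a cancel flag the webtop can set; stop cleanly if it's on --->
<cfquery name="qStatus" datasource="myDSN">
    SELECT cancelled FROM import_status WHERE jobName = 'nightlyImport'
</cfquery>
<cfif qStatus.cancelled EQ 1>
    <cfabort showerror="Import cancelled from the webtop.">
</cfif>

<!--- ...batch processing from the sketch above... --->

<!--- Only call the file again if there's more work; otherwise stop --->
<cfif qBatch.recordCount EQ batchSize>
    <cflocation url="import_batch.cfm?lastID=#lastID#" addtoken="false">
<cfelse>
    <cfoutput>Import complete.</cfoutput>
</cfif>
```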
I have one client that actually does import millions of records at a time using this process, and it is quite fast and efficient (well, it takes many hours, but it's still faster than other solutions I've done in the past). And there is no need to run all your methods manually and then update each record a second time using setData(). *Note:* For that particular client I find it faster to import the data with no Solr indexing. Once the import is complete, I index the data in batches using the same batch process.
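If your collection is a plain ColdFusion Solr collection, the same loop can feed cfindex one batch at a time; if your CMS handles its own indexing, swap in its call instead. Again, the file name, datasource, table, and collection name here are just assumptions:

```cfm
<!--- index_batch.cfm - index one batch per request using the same hand-off pattern --->
<cfparam name="url.lastID" default="0">
<cfset batchSize = 1000>

<cfquery name="qIndexBatch" datasource="myDSN" maxrows="#batchSize#">
    SELECT id, title, body
    FROM   imported_content
    WHERE  id > <cfqueryparam value="#url.lastID#" cfsqltype="cf_sql_integer">
    ORDER BY id
</cfquery>

<!--- Push this batch into the Solr collection --->
<cfindex collection="siteContent"
         action="update"
         type="custom"
         query="qIndexBatch"
         key="id"
         title="title"
         body="body">

<!--- A full batch means there's more to index: hand off to a fresh request --->
<cfif qIndexBatch.recordCount EQ batchSize>
    <cfset lastID = qIndexBatch.id[qIndexBatch.recordCount]>
    <cflocation url="index_batch.cfm?lastID=#lastID#" addtoken="false">
</cfif>
```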
Let me know how it goes for you.