Large data - memory reduction inquiries


Chiyuri

Aug 18, 2021, 12:43:37 PM
to or-tools-discuss
noob here,

Is there possibly another way/approach I can take to reduce the memory usage?
I have 3 Boolean variables that are used to calculate constants where constraints are applied. 

I have been looking at the bus driver scheduling problem, which is a bit complicated, so I went smaller and tried the nurse scheduling problem, which provided great solutions but only for small amounts of data. In my case, I'm working with 83000+ data points, resulting in a memory error. I am using num_search_workers = 6 as I only have 6 cores.

I've considered trying to cascade my results to another model, but I'm not sure how to forward the values/results of the Boolean variables to the next model (if that is possible).

The code below analyses when an aircraft is to take a certain action (downlink, processing, or take pictures) based on available onboard memory. I'm wondering if there is anything I've done wrong or can do to improve the processing time and reduce my PC's memory usage. If I've written something completely wrong, please also let me know; any feedback would be greatly appreciated as I'm still learning.

Note: I started with 500-4000 variables, which works OK, but it isn't able to generate results beyond 4000.

        horizon = 4000
        downlink = [model.NewBoolVar("") for _ in range(horizon)]
        processing = [model.NewBoolVar("") for _ in range(horizon)]
        take_pictures = [model.NewBoolVar("") for _ in range(horizon)]

        for s in all_shifts:

            model.Add(downlink[s] + processing[s] + take_pictures[s] <= 1)
            model.Add(country_data_list[s][2] == day_data_list[s][2] and country_data_list[s][2] == 1).OnlyEnforceIf(take_pictures[s])
            model.Add(gnd_data_list[s][2] == 1).OnlyEnforceIf(downlink[s])



        num_downlinked = 0
        num_processed = 0
        num_pics = 0

        for s in all_shifts:
            num_downlinked += downlink[s]
            num_pics += take_pictures[s] - downlink[s]
            num_processed += processing[s] - downlink[s]


            # conversion for number of processed images ==> to 1 pic
            total_to_process = (num_pics * int(image_mem / process_im_mem))
            # conversion for number of processed to 1 downlink....downlink cannot send less than 1 image processed
            total_to_downlink = (num_processed * int(downlink_data_rate / process_im_mem))

            model.Add(num_pics <= max_photos_taken)
            model.Add(num_processed <= total_to_process)
            model.Add(num_downlinked <= total_to_downlink)


        model.Maximize(sum(downlink+processing+take_pictures))

        solver = cp_model.CpSolver()
        
        solver.parameters.max_time_in_seconds = 30
        solver.parameters.log_search_progress = True
        solver.parameters.num_search_workers = 6

        solver.Solve(model)

        print([(country_data_list[s][0], solver.Value(take_pictures[s]), solver.Value(processing[s]), solver.Value(downlink[s])) for s in all_shifts])

        

Xiang Chen

Aug 18, 2021, 5:14:37 PM
to or-tools...@googlegroups.com
Two things:

model.Add(country_data_list[s][2] == day_data_list[s][2] and country_data_list[s][2] == 1).OnlyEnforceIf(take_pictures[s])

should be:

model.Add(country_data_list[s][2] == day_data_list[s][2]).OnlyEnforceIf(take_pictures[s])
model.Add(country_data_list[s][2] == 1).OnlyEnforceIf(take_pictures[s])

And maybe you could increase the granularity or try to decompose the problem.


Chiyuri

Aug 19, 2021, 9:11:31 AM
to or-tools-discuss
Thanks Xiang

Chiyuri

Aug 22, 2021, 10:39:06 AM
to or-tools-discuss
So, I've separated it into chunks and carried my running totals over to the next chunk each time.

It works for the first 4 reps, and I'm now getting this error on my last one (I'm testing it on the first 6000 data points):

value += coef * solution.solution[expr.Index()]
return self._values[key]
IndexError: list index out of range


Is there anything I can do to resolve this?

My code is below. Apologies if it looks a little messy.


horizon = 6000
a = 0
i = 0
z = 0
b = 0
c = 0
n = 0
n1 = 0
num_processed = 0
num_pics = 0
images_taken = []
processed_intervals = []
dumped = []
binary_table = []

while a in all_shifts:

            b = c
            c = b + int(horizon / 6)
            z = c - b

            num_downlinked = 0
            

            downlink = [model.NewBoolVar("") for _ in range(0, z)]
            processing = [model.NewBoolVar("") for _ in range(0, z)]
            take_pictures = [model.NewBoolVar("") for _ in range(0, z)]


            # 0, 1000, 2000, 3000 => b, c increment in steps of 1000 to extract data from the original list
            while n1 in range(b, c):

                if n1 > 0:
                    s1 = n1 - b
                else:
                    s1 = 0

                model.Add(downlink[s1] + processing[s1] + take_pictures[s1] <= 1)
                model.Add(country_data_list[n1][2] == day_data_list[n1][2]).OnlyEnforceIf(take_pictures[s1])
                model.Add(country_data_list[n1][2] == 1).OnlyEnforceIf(take_pictures[s1])
                model.Add(gnd_data_list[n1][2] == 1).OnlyEnforceIf(downlink[s1])
                n1 += 1


            # 0, 1000, 2000, 3000
            for s in range(i, z):

                num_downlinked += downlink[s]
                # int(downlink_data_rate / process_im_mem), for example 56
                num_processed += (processing[s]) - (time_interval * downlink[s] * (int(downlink_data_rate / process_im_mem)))
                num_pics += (take_pictures[s] * 100) - (time_interval * downlink[s] * (int((downlink_data_rate / image_mem) * 100)))

                # conversion for number of processed images ==> to 1 pic
                total_to_process = (num_pics * int((image_mem / process_im_mem) / 100))
                # conversion for number of processed to 1 downlink....downlink cannot send less than 1 image processed
                total_to_downlink = ((num_processed) * int((1000 * process_im_mem / downlink_data_rate)))

                model.Add(num_pics <= 100 * max_photos_taken)
                model.Add(num_processed <= total_to_process)
                model.Add(num_downlinked <= (total_to_downlink))

                images_taken.append(num_pics)
                processed_intervals.append(num_processed)
                dumped.append(num_downlinked)
                binary_table.append([take_pictures[s], processing[s], downlink[s]])
                 

      
            model.Maximize(sum(downlink + processing + take_pictures))

            # Creates the solver and solve.
            solver = cp_model.CpSolver()
            solver.parameters.search_branching = cp_model.LP_SEARCH
            solver.parameters.num_search_workers = 6
            solver.parameters.max_time_in_seconds = 120

            solver.Solve(model)

            print([(country_data_list[n][0], solver.Value(take_pictures[s]), solver.Value(processing[s]), 0, solver.Value(downlink[s])) for s in range(i, z) for n in range(b, c)])
            print(solver.Value(sum(take_pictures)), solver.Value(sum(processing)), solver.Value(sum(downlink)))
            print(solver.Value(num_pics) / 100, solver.Value(num_processed), solver.Value(num_downlinked) / 1000)

            print('number of pics taken:', solver.Value(sum(take_pictures)), 'number of images processed ', (solver.Value(sum(processing)) / (image_mem / process_im_mem)), 'number of images downloaded ', (solver.Value(sum(downlink)) / (image_mem / downlink_data_rate)))
            print(solver.ObjectiveValue())

            a = c

On Wednesday, August 18, 2021 at 10:14:37 PM UTC+1 xiang10...@gmail.com wrote: