<Out of memory. Program aborted.>


Zhixuan Huan

Feb 27, 2021, 10:00:09 PM
to DynamoRIO Users
Has anyone run into this OOM issue? I got it while running the instrcalls.c sample client.

Zhixuan Huan

Feb 27, 2021, 10:04:57 PM
to DynamoRIO Users
After grepping through the source code, I figured this is related to heap memory, because the error message comes from core/heap.c. However, there seems to be no way to change DynamoRIO's heap limit (and in my opinion there shouldn't be one). At this point I really have no clue how to solve this.

Derek Bruening

Feb 28, 2021, 11:20:21 AM
to dynamor...@googlegroups.com
As the docs explain at https://dynamorio.org/using.html#sec_64bit_reach, the address space model is inherently limited to 2GB by default for reachable space.  There are all kinds of statistics recorded.  I would run with `-rstats_to_stderr` to get the key data on what is using which type of memory at the OOM point.
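For reference, a minimal invocation might look like the sketch below; the drrun location, client path, and application name are placeholders for your setup:

bin64/drrun -rstats_to_stderr -c api/bin/libinstrcalls.so -- ./your_app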


zxhuan

Mar 2, 2021, 8:11:18 PM
to dynamor...@googlegroups.com
Hi Derek,

Thanks for your reply. I didn't get anything with that flag on. Is it supposed to print the error log to stderr or some log file?
PS. Is there any way to break the 2GB limit? 
PPS. I think the OOM error has to do with dr_insert_call_instrumentation() or dr_insert_mbr_instrumentation(). I made another custom client that does some instruction counting. Without invoking these functions, the client runs normally, but the OOM error appears when they are invoked. Do you have any insight into this?


Derek Bruening

Mar 4, 2021, 11:10:33 AM
to dynamor...@googlegroups.com
On Tue, Mar 2, 2021 at 8:11 PM zxhuan <huanz...@gmail.com> wrote:
Hi Derek,

Thanks for your reply. I didn't get anything with that flag on. Is it supposed to print the error log to stderr or some log file?

Are you running a graphical app with no attached console on Windows?  (Did you try a console app as a sanity check?)  As explained at https://dynamorio.org/using.html#sec_options under -stderr_mask, there is no stderr output from a graphical app unless you redirect it to a file with "2> OUT".
 
PS. Is there any way to break the 2GB limit? 

The docs explain it pretty well, I think: only the code cache, the client lib (unless you set the option the docs mention), and the default client heap (unless you request otherwise) are in the 2GB region; DR's own heap and stack are not.  So, as you saw in the docs, the client parts are under your control.  The cache cannot be split, but the cache is never the biggest piece and we have never seen it hit the limit.  I suggest getting actual data and debugging what is going on before assuming the cache hit the reachability limit.  Is this instead a commit limit?  DR prints a code indicating what type of OOM it is.  You never pasted the actual output.
 
PPS. I think the OOM error has to do with dr_insert_call_instrumentation() or dr_insert_mbr_instrumentation(). I made another custom client that does some instruction counting. Without invoking these functions, the client runs normally, but the OOM error appears when they are invoked. Do you have any insight into this?

A clean call takes up more space, but using that much cache without running a truly enormous amount of code is unlikely.  Again, please get data: paste the actual OOM message and, ideally, a callstack at the OOM point.  There are multiple types of OOM.  How big is this app (as in, how many basic blocks executed)?  You have given almost zero information on this problem.  This is not a normal thing to hit, so something unusual is going on.
 


zxhuan

Mar 4, 2021, 2:53:10 PM
to dynamor...@googlegroups.com
Thanks for your reply. I have been running DynamoRIO from a console on Linux the whole time. Sorry for the missing information. I was running an older version (7.0, I think), and it did not give me any information on the type of OOM, so I switched to the newest version. This is what I got:

Client instrcalls is running
Data file /home/jason/Downloads/github_dynamorio/dynamorio/build/api/bin/instrcalls.a.out.24312.0000.log created
PARSEC Benchmark Suite
Number of Simulations: 40000,  Number of threads: 1 Number of swaptions: 64
<Application /home/jason/parsec-llvm-ir/swaptions/a.out (24312).  Out of memory.  Program aborted.  Source C, type 0x0000000000000001, code 0x000000000000000c.>


Regarding the size of the app, this is what I got with bbcount:

Instrumentation results:
3451172558 basic block executions
      2391 basic blocks needed flag saving
      7358 basic blocks did not


I will try to get the stack trace shortly.

Is this instead a commit limit?

Could you explain more about that?

Thanks,
Jason

Derek Bruening

Mar 4, 2021, 3:38:59 PM
to dynamor...@googlegroups.com
On Thu, Mar 4, 2021 at 2:53 PM zxhuan <huanz...@gmail.com> wrote:
Thanks for your reply. I have been running DynamoRIO from a console on Linux the whole time. Sorry for the missing information. I was running an older version (7.0, I think), and it did not give me any information on the type of OOM, so I switched to the newest version. This is what I got:

Client instrcalls is running
Data file /home/jason/Downloads/github_dynamorio/dynamorio/build/api/bin/instrcalls.a.out.24312.0000.log created
PARSEC Benchmark Suite
Number of Simulations: 40000,  Number of threads: 1 Number of swaptions: 64
<Application /home/jason/parsec-llvm-ir/swaptions/a.out (24312).  Out of memory.  Program aborted.  Source C, type 0x0000000000000001, code 0x000000000000000c.>

Decoding that message: source C == OOM_COMMIT, type 1 == VMM_HEAP, code 0xc == ENOMEM.

So it is unrelated to the address space at all (i.e., unrelated to the 2GB code cache limit).  The kernel is refusing to hand out more pages at mprotect() even before such pages are touched.

I would check your system's overcommit settings (/proc/sys/vm/overcommit_memory) and overall memory situation.
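To sanity-check that side, the standard Linux knobs can be inspected with something like the commands below (whether and how to change them depends on your system; the comments give the usual meanings):

cat /proc/sys/vm/overcommit_memory   # 0 = heuristic, 1 = always allow, 2 = strict (uses overcommit_ratio)
cat /proc/sys/vm/overcommit_ratio
free -h                              # overall RAM and swap usage
sudo sysctl -w vm.overcommit_memory=1   # one experiment if strict mode (2) is the problem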
 
Regarding the size of the app, this is what I got with bbcount:

Instrumentation results:
3451172558 basic block executions
      2391 basic blocks needed flag saving
      7358 basic blocks did not

Dynamic executions do not matter: I suppose the unique count is the sum of the bottom two numbers, which at <10K is tiny.

BTW, it looks like printing the -rstats_to_stderr stats at the OOM exit point was missing until https://github.com/DynamoRIO/dynamorio/commit/94c8ad43e62c39824cf68e7a1d4ddb7661477fa6; that's why nothing was printed for you.  The latest release at https://github.com/DynamoRIO/dynamorio/releases/latest will print the stats at the OOM point, which will show the memory breakdown and the block count.

But it sure seems like DR is not using much memory if it only built 10K blocks, and that the app (or other things on the system) has taken all the memory, with the overcommit settings causing the failures.
 

zxhuan

Mar 9, 2021, 3:23:32 AM
to dynamor...@googlegroups.com
Thanks for the informative reply. I have tried different overcommit settings on my system, one being vm.overcommit_memory = 1 (in which case the overcommit ratio shouldn't matter, but it is 90 if you ask) and the other being vm.overcommit_memory = 2 with vm.overcommit_ratio = 90. Neither works. My system has 64 GiB of RAM and no swap. There is abundant memory before launching DynamoRIO (only 3.8 GiB in use). The run takes ~8 GiB of memory before hitting the OOM error. Below is what I got with the -rstats_to_stderr flag:

              Peak threads under DynamoRIO control :                 1
                              Threads ever created :                 1
                                 System calls, pre :                70
                                System calls, post :                62
                                 Application mmaps :                27
                               Application munmaps :                 1
                   Basic block fragments generated :              8073
                         Trace fragments generated :               441
             Peak fcache combined capacity (bytes) :            655360
                    Peak fcache units on live list :                13
                Peak special heap capacity (bytes) :            155648
                      Peak heap units on live list :             32685
                       Peak stack capacity (bytes) :            147456
                        Peak heap capacity (bytes) :        8582631424
                 Peak total memory from OS (bytes) :        8900859616
              Peak vmm blocks for unreachable heap :           1887422
                         Peak vmm blocks for stack :                42
      Peak vmm blocks for unreachable special heap :                 5
      Peak vmm blocks for unreachable special mmap :                 7
                Peak vmm blocks for reachable heap :               206
                         Peak vmm blocks for cache :               208
        Peak vmm blocks for reachable special heap :                78
        Peak vmm blocks for reachable special mmap :              2701
            Peak vmm virtual memory in use (bytes) :        7744180224


I tried setting a breakpoint at report_low_on_memory() in heap.c and running the debug version, but only got the error info below:

<CURIOSITY : (0) && "running low on vm reserve" in file /home/jason/Downloads/github_dynamorio/dynamorio/core/heap.c line 1578
version 8.0.18684, custom build
-no_dynamic_options -client_lib '/home/jason/Downloads/github_dynamorio/dynamorio/build/api/bin/libinstrcalls.so;0;' -client_lib64 '/home/jason/Downloads/github_dynamorio/dynamorio/build/api/bin/libinstrcalls.so;0;' -code_api -stack_size 56K -signal_stack_size 32K -max_elide_jmp 0 -max_elide_call 0 -early_inject -emulate
0x00007ffdc14d6030 0x000000007117937b
0x00007ffdc14d60b0 0x000000007117dcee
0x00007ffdc14d6180 0x0000000071181c60
0x00007ffdc14d6280 0x0000000071185cfb
0x00007ffdc14d64a0 0x000000007118158e
0x00007ffdc14d64e0 0x0000000071181737
0x00007ffdc14d6510 0x00000000711e5a58
0x00007ffdc14d6550 0x00000000711e5cfb
0x00007ffdc14d6580 0x00000000711f4632
0x00007ffdc14d65a0 0x0000000076017286
0x00007fff8e10dea0 0x0000000000000000
/home/jason/Downloads/github_dynamorio/dynamorio/build/lib64/debug/libdynamorio.so=0x0000000071000000
/home/jason/Downloads/github_dynamorio/dynamorio/build/api/bin/libinstrcalls.so=0x0000000072000000
/home/jason/Downloads/github_dynamorio/dynamorio/build/ext/lib64/debug/libdrx.so=0x0000000077000000
/home/jason/Downloads/github_dynamorio/dynamorio/build/ext/lib64/debug/libdrreg.so=0x0000000078000000
/home/jason/Downloads/github_dynamorio/dynamor>
<vmm_heap_commit oom: timeout and retry>
<Application /home/jason/parsec-llvm-ir/swaptions/a.out (11271).  Internal Error: DynamoRIO debug check failure: Not implemented @/home/jason/Downloads/github_dynamorio/dynamorio/core/unix/os.c:1414 (0)
(Error occurred @8518 frags)
version 8.0.18684, custom build
-no_dynamic_options -client_lib '/home/jason/Downloads/github_dynamorio/dynamorio/build/api/bin/libinstrcalls.so;0;' -client_lib64 '/home/jason/Downloads/github_dynamorio/dynamorio/build/api/bin/libinstrcalls.so;0;' -code_api -stack_size 56K -signal_stack_size 32K -max_elide_jmp 0 -max_elide_call 0 -early_inject -emulate
0x00007ffdc14d5d50 0x00000000710e5514
0x00007ffdc14d5fa0 0x00000000712c8455
0x00007ffdc14d5fc0 0x000000007117a15b
0x00007ffdc14d6060 0x000000007117d9a2
0x00007ffdc14d60b0 0x000000007117e247
0x00007ffdc14d6180 0x0000000071181c60
0x00007ffdc14d6280 0x0000000071185cfb
0x00007ffdc14d64a0 0x000000007118158e
0x00007ffdc14d64e0 0x0000000071181737
0x00007ffdc14d6510 0x00000000711e5a58
0x00007ffdc14d6550 0x00000000711e5cfb
0x00007ffdc14d6580 0x00000000711f4632
0x00007ffdc14d65a0 0x0000000076017286
0x00007ffd7b2c2e20 0x0000000000000000
/home/jason/Downloads/github_dynamorio/dynamorio/build/lib64/debug/libdynamorio.so=0x0000000071000000
/home/jason/Downloads/github_dynamorio/dynamorio/build/api/bin/libinstrcalls.so=0x0000000072000000
/home/jason/Downloads/github_dynamorio/dynamorio/build/ext/lib64/debug/libdrx.so=0x0000000077000000
/home/jason/Downloads/github_dynamorio/dynamorio/build/ext/lib64/debug/libdrreg.so=0x0000000078000000
/home/jason/Downloads/github_dynamorio/dynamor>


It looks like the error has something to do with the virtual address space reservation. Do you have any idea what could be going on here?

Derek Bruening

Mar 9, 2021, 5:02:37 PM
to dynamor...@googlegroups.com
On Tue, Mar 9, 2021 at 3:23 AM zxhuan <huanz...@gmail.com> wrote:
Thanks for the informative reply. I have tried different overcommit settings on my system, one being vm.overcommit_memory = 1 (in which case the overcommit ratio shouldn't matter, but it is 90 if you ask) and the other being vm.overcommit_memory = 2 with vm.overcommit_ratio = 90. Neither works. My system has 64 GiB of RAM and no swap. There is abundant memory before launching DynamoRIO (only 3.8 GiB in use). The run takes ~8 GiB of memory before hitting the OOM error. Below is what I got with the -rstats_to_stderr flag:

              Peak threads under DynamoRIO control :                 1
                              Threads ever created :                 1
                                 System calls, pre :                70
                                System calls, post :                62
                                 Application mmaps :                27
                               Application munmaps :                 1
                   Basic block fragments generated :              8073
                         Trace fragments generated :               441
             Peak fcache combined capacity (bytes) :            655360
                    Peak fcache units on live list :                13
                Peak special heap capacity (bytes) :            155648
                      Peak heap units on live list :             32685
                       Peak stack capacity (bytes) :            147456
                        Peak heap capacity (bytes) :        8582631424
                 Peak total memory from OS (bytes) :        8900859616
              Peak vmm blocks for unreachable heap :           1887422

Wow, there is something very unusual here: 8G of heap (in 4K blocks) when the cache is that small?  Are you sure this is an unmodified version of instrcalls?  If I run instrcalls on an app that makes 6x as many basic blocks as your run here, the peak vmm unreachable heap block count is 1000x less than this!  Who is allocating all that heap?  If you run with -debug -loglevel 1, look at the end of the process log file for something like this:

Updated-at-end Process (max is total of maxes) heap breakdown:
BB Fragments: cur=    0K, max=  210K, #=   1952, 1=  280, new=  208K, re=   18K
Coarse Links: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
 Future Frag: cur=    0K, max=   18K, #=   2529, 1=   48, new=   18K, re=   41K
 Frag Tables: cur=    0K, max=   66K, #=     11, 1=33328, new=   74K, re=   16K
  IBL Tables: cur=    0K, max=    7K, #=      6, 1= 2416, new=    7K, re=    0K
      Traces: cur=    0K, max=   73K, #=     67, 1=66520, new=   72K, re=    0K
  FC Empties: cur=    0K, max=    0K, #=    120, 1=   40, new=    0K, re=    8K
   Vm Multis: cur=    0K, max=    1K, #=   2103, 1=   96, new=    3K, re=  144K
          IR: cur=    0K, max=   32K, #=  66567, 1=  256, new=   46K, re= 3515K
  RCT Tables: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
    VM Areas: cur=    0K, max=   38K, #=   2947, 1= 4000, new=   33K, re=  347K
     Symbols: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
  TH Counter: cur=    0K, max=    7K, #=    308, 1=   16, new=    0K, re=    7K
   Tombstone: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
Hot Patching: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
  Thread Mgt: cur=    0K, max=   34K, #=      9, 1=33544, new=   32K, re=    2K
  Memory Mgt: cur=    0K, max=   16K, #=    118, 1= 7248, new=   15K, re=    2K
       Stats: cur=    0K, max=    9K, #=      2, 1= 6944, new=    9K, re=    0K
 SpecialHeap: cur=    0K, max=   38K, #=   1730, 1=   23, new=   38K, re=    0K
      Client: cur=    0K, max=  149K, #=  12947, 1=35944, new=  151K, re=  677K
     Lib Dup: cur=  159K, max= 4174K, #=  16173, 1=1887K, new= 4174K, re= 2524K
  Clean Call: cur=    0K, max=    0K, #=    365, 1=  200, new=    0K, re=   25K
       Other: cur=    0K, max=   70K, #=    416, 1=27000, new=   67K, re=    8K
Total cur usage:    159 KB
Total max (not nec. all used simult.):   4951 KB
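For completeness, a possible way to produce such a log, assuming a debug build and the sample client (the paths here are placeholders; by default the per-process log files land under the logs/ directory of the DR build):

bin64/drrun -debug -loglevel 1 -c api/bin/libinstrcalls.so -- ./your_app
# then search the end of the process-level log under logs/ for "heap breakdown"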

 

zxhuan

Mar 9, 2021, 9:09:38 PM
to dynamor...@googlegroups.com
Yes, I am pretty sure I'm running the original instrcalls.c that comes with the source tarball. Here is that section of the log file:

Updated-at-end Process (max is total of maxes) heap breakdown:
BB Fragments: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
Coarse Links: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
 Future Frag: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
 Frag Tables: cur=    1K, max=    1K, #=      3, 1=  288, new=    1K, re=    0K
  IBL Tables: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
      Traces: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
  FC Empties: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
   Vm Multis: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
          IR: cur=    0K, max=    9K, #=   1523, 1=  104, new=   13K, re=   84K
  RCT Tables: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
    VM Areas: cur=   41K, max=   41K, #=2227653, 1= 4000, new=   36K, re=156615K
     Symbols: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
  TH Counter: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
   Tombstone: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
Hot Patching: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
  Thread Mgt: cur=   32K, max=   32K, #=      2, 1=32768, new=   32K, re=    0K
  Memory Mgt: cur=   11K, max=   11K, #=    141, 1= 5032, new=    9K, re=    2K
       Stats: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
 SpecialHeap: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
      Client: cur=    0K, max=    0K, #=5176703, 1=   88, new=    1K, re=202213K
     Lib Dup: cur=8374109K, max=8374140K, #=66900246, 1=9328K, new=8374169K, re=45583975K
  Clean Call: cur=    0K, max=    0K, #=      3, 1=  168, new=    0K, re=    0K
       Other: cur=   55K, max=   55K, #=     99, 1=22568, new=   58K, re=    1K
Total cur usage: 8374252 KB
Total max (not nec. all used simult.): 8374293 KB




Derek Bruening

Mar 9, 2021, 9:52:36 PM
to dynamor...@googlegroups.com
On Tue, Mar 9, 2021 at 9:09 PM zxhuan <huanz...@gmail.com> wrote:
Yes, I am pretty sure I'm running the original instrcalls.c that comes with the source tarball. Here is that section of the log file:

Updated-at-end Process (max is total of maxes) heap breakdown:
BB Fragments: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
Coarse Links: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
 Future Frag: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
 Frag Tables: cur=    1K, max=    1K, #=      3, 1=  288, new=    1K, re=    0K
  IBL Tables: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
      Traces: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
  FC Empties: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
   Vm Multis: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
          IR: cur=    0K, max=    9K, #=   1523, 1=  104, new=   13K, re=   84K
  RCT Tables: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
    VM Areas: cur=   41K, max=   41K, #=2227653, 1= 4000, new=   36K, re=156615K
     Symbols: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
  TH Counter: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
   Tombstone: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
Hot Patching: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
  Thread Mgt: cur=   32K, max=   32K, #=      2, 1=32768, new=   32K, re=    0K
  Memory Mgt: cur=   11K, max=   11K, #=    141, 1= 5032, new=    9K, re=    2K
       Stats: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
 SpecialHeap: cur=    0K, max=    0K, #=      0, 1=    0, new=    0K, re=    0K
      Client: cur=    0K, max=    0K, #=5176703, 1=   88, new=    1K, re=202213K
     Lib Dup: cur=8374109K, max=8374140K, #=66900246, 1=9328K, new=8374169K, re=45583975K

As you can see, this is the culprit category.  Some private library is using 8G of heap!  libinstrcalls.so only uses libc and ld.so, and all they did was initialize, so as you can imagine this is very strange.  I would suggest breakpoints on redirect_malloc to see what is going on, automated so you can aggregate over the 66 million calls (as shown in the "#" column) and see the typical callstacks (load the private syms first, of course).  You saw my result with 16K allocs vs. your 66 million: something is very different about your system libraries.
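One rough way to automate that is sketched below. It assumes gdb stays with the process across drrun's exec and can resolve redirect_malloc once the debug libdynamorio.so is loaded (with early injection you may need to help gdb find DR's symbols); the paths, backtrace depth, and output file are placeholders, and with tens of millions of hits you would likely stop it after collecting a sample of stacks:

cat > redirect_malloc.gdb <<'EOF'
set breakpoint pending on
set follow-fork-mode child
break redirect_malloc
commands
  silent
  bt 8
  continue
end
run
EOF
gdb -batch -x redirect_malloc.gdb \
    --args bin64/drrun -debug -c api/bin/libinstrcalls.so -- ./your_app \
    > redirect_malloc_stacks.txt
# Skim redirect_malloc_stacks.txt for the most frequent callers.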
 

Derek Bruening

Mar 9, 2021, 9:55:16 PM
to dynamor...@googlegroups.com
You could also try turning off libc (the CMake variable DynamoRIO_USE_LIBC: see https://dynamorio.org/using.html#sec_extlibs), building an instrcalls-style client that doesn't need any private libraries, and confirming that it does not use 8G of heap just to load itself.
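A hypothetical sketch of such a stripped-down client build is below, using the documented find_package(DynamoRIO)/configure_DynamoRIO_client() pattern; the project name, source file, and DynamoRIO path are placeholders, and any extensions the client still needs would be added with use_DynamoRIO_extension():

cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.7)
project(nolibc_client C)
find_package(DynamoRIO REQUIRED)
add_library(nolibc_client SHARED my_client.c)
# Disable the private libc before configuring the client (see sec_extlibs):
set(DynamoRIO_USE_LIBC OFF)
configure_DynamoRIO_client(nolibc_client)
EOF
cmake -DDynamoRIO_DIR=/path/to/DynamoRIO/cmake . && make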