--
You received this message because you are subscribed to the Google Groups "DynamoRIO Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dynamorio-use...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/dynamorio-users/ebc46521-2087-4633-ab90-d265822f5f47n%40googlegroups.com.
You received this message because you are subscribed to a topic in the Google Groups "DynamoRIO Users" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/dynamorio-users/4DiZR1GrIvY/unsubscribe.
To unsubscribe from this group and all its topics, send an email to dynamorio-use...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/dynamorio-users/CAO1ikSaBs98p8KBSC5f80HonQ09tJ2rRF8Gs3nRdT31ZWXEORA%40mail.gmail.com.
Hi Derek,

Thanks for your reply. I didn't get anything with that flag on. Is it supposed to print the error log to stderr, or to some log file?

PS. Is there any way to raise the 2GB limit?

PPS. I think the OOM error has to do with dr_insert_call_instrumentation() or dr_insert_mbr_instrumentation(). I made another custom client that does some instruction counting. Without invoking these functions, the client runs normally, but the OOM error appears when they are invoked. Do you have any insight into this?
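For reference, the instrumentation step in an instrcalls-style client looks roughly like this. This is only a sketch based on the public DynamoRIO API (dr_api.h / drmgr.h); the callback names and bodies are illustrative and may differ from the shipped sample:

```c
/* Sketch of a per-instruction instrumentation callback, as in the
 * instrcalls sample. Assumes the DynamoRIO SDK headers are available. */
#include "dr_api.h"
#include "drmgr.h"

/* Clean-call targets, invoked at run time with source and target PCs. */
static void
at_call(app_pc instr_addr, app_pc target_addr)
{
    /* e.g. log or count the direct call here */
}

static void
at_mbr(app_pc instr_addr, app_pc target_addr)
{
    /* e.g. log or count the indirect branch here */
}

static dr_emit_flags_t
event_app_instruction(void *drcontext, void *tag, instrlist_t *bb,
                      instr_t *instr, bool for_trace, bool translating,
                      void *user_data)
{
    if (instr_is_call_direct(instr)) {
        /* Inserts a clean call passing the instruction and target PCs. */
        dr_insert_call_instrumentation(drcontext, bb, instr, (void *)at_call);
    } else if (instr_is_mbr(instr)) {
        /* Indirect branches additionally need a scratch spill slot. */
        dr_insert_mbr_instrumentation(drcontext, bb, instr, (void *)at_mbr,
                                      SPILL_SLOT_1);
    }
    return DR_EMIT_DEFAULT;
}
```

Each such insertion expands the generated code and the client's heap usage, which is why enabling these calls can push a run over the reachable-memory limit where a plain counting client does not.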
On Sun, Feb 28, 2021 at 11:20 AM 'Derek Bruening' via DynamoRIO Users <dynamor...@googlegroups.com> wrote:

As the docs explain (https://dynamorio.org/using.html#sec_64bit_reach), the address space model is inherently limited to 2GB by default for reachable space. There are all kinds of statistics recorded. I would run with `-rstats_to_stderr` to get the key data on what is using which type of memory at the OOM point.

On Sat, Feb 27, 2021 at 10:04 PM Zhixuan Huan <huanz...@gmail.com> wrote:

After grepping the source code, this seems to be related to heap memory, because the error message comes from core/heap.c. However, there seems to be no way to change DynamoRIO's heap limit (and there shouldn't be one, in my opinion). At this point I really have no clue how to solve this.

On Saturday, February 27, 2021 at 10:00:09 PM UTC-5 Zhixuan Huan wrote:

Did anyone run into the OOM issue? I got this when I was running the instrcalls.c client.
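The flag Derek suggests is a DynamoRIO runtime option, so it goes to drrun before the client. A minimal invocation sketch (the build paths, client library name, and application are illustrative, not from this thread):

```shell
# Run the app under DynamoRIO with release-build statistics printed to
# stderr at exit (and at the OOM point). Paths are illustrative.
bin64/drrun -rstats_to_stderr -c api/bin/libinstrcalls.so -- ./a.out
```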
Is this instead a commit limit?
Thanks for your reply. I have been running DynamoRIO in a console on Linux the whole time; sorry for the missing information. I was running an older version (7.0, I think), which did not give me any information on the type of OOM, so I switched to the newest version. This is what I got:

Client instrcalls is running
Data file /home/jason/Downloads/github_dynamorio/dynamorio/build/api/bin/instrcalls.a.out.24312.0000.log created
PARSEC Benchmark Suite
Number of Simulations: 40000, Number of threads: 1 Number of swaptions: 64
<Application /home/jason/parsec-llvm-ir/swaptions/a.out (24312). Out of memory. Program aborted. Source C, type 0x0000000000000001, code 0x000000000000000c.>
Regarding the size of the app, this is what I got with bbcount:

Instrumentation results:
3451172558 basic block executions
2391 basic blocks needed flag saving
7358 basic blocks did not
Thanks for the informative reply. I have tried different overcommit settings on my system: one with vm.overcommit_memory = 1 (in this mode the overcommit ratio should not matter, but it is 90 if you ask), and the other with vm.overcommit_memory = 2 and vm.overcommit_ratio = 90. Neither works. My system has 64 GiB of RAM and no swap, and there is abundant memory before launching DynamoRIO (only 3.8 GiB in use). The run takes ~8 GiB of memory before hitting the OOM error. Below is what I got with the -rstats_to_stderr flag:

Peak threads under DynamoRIO control : 1
Threads ever created : 1
System calls, pre : 70
System calls, post : 62
Application mmaps : 27
Application munmaps : 1
Basic block fragments generated : 8073
Trace fragments generated : 441
Peak fcache combined capacity (bytes) : 655360
Peak fcache units on live list : 13
Peak special heap capacity (bytes) : 155648
Peak heap units on live list : 32685
Peak stack capacity (bytes) : 147456
Peak heap capacity (bytes) : 8582631424
Peak total memory from OS (bytes) : 8900859616
Peak vmm blocks for unreachable heap : 1887422
Updated-at-end Process (max is total of maxes) heap breakdown:
BB Fragments: cur= 0K, max= 210K, #= 1952, 1= 280, new= 208K, re= 18K
Coarse Links: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Future Frag: cur= 0K, max= 18K, #= 2529, 1= 48, new= 18K, re= 41K
Frag Tables: cur= 0K, max= 66K, #= 11, 1=33328, new= 74K, re= 16K
IBL Tables: cur= 0K, max= 7K, #= 6, 1= 2416, new= 7K, re= 0K
Traces: cur= 0K, max= 73K, #= 67, 1=66520, new= 72K, re= 0K
FC Empties: cur= 0K, max= 0K, #= 120, 1= 40, new= 0K, re= 8K
Vm Multis: cur= 0K, max= 1K, #= 2103, 1= 96, new= 3K, re= 144K
IR: cur= 0K, max= 32K, #= 66567, 1= 256, new= 46K, re= 3515K
RCT Tables: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
VM Areas: cur= 0K, max= 38K, #= 2947, 1= 4000, new= 33K, re= 347K
Symbols: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
TH Counter: cur= 0K, max= 7K, #= 308, 1= 16, new= 0K, re= 7K
Tombstone: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Hot Patching: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Thread Mgt: cur= 0K, max= 34K, #= 9, 1=33544, new= 32K, re= 2K
Memory Mgt: cur= 0K, max= 16K, #= 118, 1= 7248, new= 15K, re= 2K
Stats: cur= 0K, max= 9K, #= 2, 1= 6944, new= 9K, re= 0K
SpecialHeap: cur= 0K, max= 38K, #= 1730, 1= 23, new= 38K, re= 0K
Client: cur= 0K, max= 149K, #= 12947, 1=35944, new= 151K, re= 677K
Lib Dup: cur= 159K, max= 4174K, #= 16173, 1=1887K, new= 4174K, re= 2524K
Clean Call: cur= 0K, max= 0K, #= 365, 1= 200, new= 0K, re= 25K
Other: cur= 0K, max= 70K, #= 416, 1=27000, new= 67K, re= 8K
Total cur usage: 159 KB
Total max (not nec. all used simult.): 4951 KB
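For reference, the two overcommit configurations tried above can be set like this (a sketch of standard Linux sysctl usage; the values shown are the ones from this thread):

```shell
# Mode 1: always overcommit (vm.overcommit_ratio is ignored here):
sysctl -w vm.overcommit_memory=1

# Mode 2: strict commit accounting with a 90% commit ratio:
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=90
```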
Yes, I am pretty sure I'm running the original instrcalls.c that comes with the source tarball. Here is the section in the log file:

Updated-at-end Process (max is total of maxes) heap breakdown:
BB Fragments: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Coarse Links: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Future Frag: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Frag Tables: cur= 1K, max= 1K, #= 3, 1= 288, new= 1K, re= 0K
IBL Tables: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Traces: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
FC Empties: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Vm Multis: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
IR: cur= 0K, max= 9K, #= 1523, 1= 104, new= 13K, re= 84K
RCT Tables: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
VM Areas: cur= 41K, max= 41K, #=2227653, 1= 4000, new= 36K, re=156615K
Symbols: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
TH Counter: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Tombstone: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Hot Patching: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Thread Mgt: cur= 32K, max= 32K, #= 2, 1=32768, new= 32K, re= 0K
Memory Mgt: cur= 11K, max= 11K, #= 141, 1= 5032, new= 9K, re= 2K
Stats: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
SpecialHeap: cur= 0K, max= 0K, #= 0, 1= 0, new= 0K, re= 0K
Client: cur= 0K, max= 0K, #=5176703, 1= 88, new= 1K, re=202213K
Lib Dup: cur=8374109K, max=8374140K, #=66900246, 1=9328K, new=8374169K, re=45583975K