Mutex M... is already destroyed


Francisco Moraes

Jan 7, 2022, 6:47:58 AM
to thread-sanitizer
I am adding TSAN support to our product. It is working great but one issue I am running into is the message:

Mutex Mxxxxxxx is already destroyed.

I have instrumented our different mutex implementations with TSAN annotations but I have not figured out how to fix/avoid the above message. 

What I believe happens is that a mutex is created and acquired by the main thread (or another thread). This mutex is then destroyed, and its memory is reused by another, newly created mutex. This new mutex doesn't seem to be properly registered: even though it has a new id, the old mutex is still known to TSAN.

It seems that MetaMap::GetAndLock doesn't reset the destroyed mutex, and the new mutex is not properly registered in the map.

When a data race occurs, the main thread is listed as owning the destroyed mutex.

Any suggestions on how to improve/fix this to avoid these messages which would be confusing to our developers?
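The reuse scenario I mean can be sketched like this (a minimal, hypothetical sketch, not our real code; the guard lets it compile to no-ops when TSAN is not enabled, and `storage` just stands in for reused heap memory):

```cpp
// The annotations exist only when building with -fsanitize=thread;
// guard them so the sketch also builds without it.
#if defined(__has_feature)
#  if __has_feature(thread_sanitizer)
#    define TSAN_ENABLED 1
#  endif
#elif defined(__SANITIZE_THREAD__)
#  define TSAN_ENABLED 1
#endif

#ifdef TSAN_ENABLED
#  include <sanitizer/tsan_interface.h>
#  define TSAN_MUTEX_CREATE(addr)  __tsan_mutex_create((addr), __tsan_mutex_not_static)
#  define TSAN_MUTEX_DESTROY(addr) __tsan_mutex_destroy((addr), __tsan_mutex_not_static)
#else
#  define TSAN_MUTEX_CREATE(addr)  ((void)(addr))
#  define TSAN_MUTEX_DESTROY(addr) ((void)(addr))
#endif

alignas(8) static char storage[8];  // stands in for reused heap memory

int reuse_address_twice()
{
    // First mutex lives at &storage, gets a TSAN id, and is destroyed.
    TSAN_MUTEX_CREATE(storage);
    TSAN_MUTEX_DESTROY(storage);

    // A second, unrelated mutex is later constructed at the same address.
    // If the runtime did not fully reset the old entry, later reports may
    // still refer to the destroyed mutex by its old id.
    TSAN_MUTEX_CREATE(storage);
    TSAN_MUTEX_DESTROY(storage);
    return 0;
}
```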

Francisco

Dmitry Vyukov

Jan 7, 2022, 8:42:07 AM
to Francisco Moraes, thread-sanitizer
Hi Francisco,

Is the problem this exact message, or some false reports that include
this message?

While you correctly note that MetaMap::GetAndLock doesn't reset the
object, it should be completely reset in MutexDestroy annotation:
https://github.com/llvm/llvm-project/blob/21babe4db326a4bbac2e317ad50e4f62643e4a1d/compiler-rt/lib/tsan/rtl/tsan_rtl_mutex.cpp#L127

I think I saw some spurious "is already destroyed" messages. The
previous implementation was somewhat messy in that area.
But this area was reworked recently and this particular message won't
appear anymore (removed from the code entirely):
https://github.com/llvm/llvm-project/commit/b332134921b42796c6b46453eaf2affdc09e3154
https://github.com/llvm/llvm-project/commit/52a4a4a53c3ebffe474802dc87cd61a38e1783b5

Does your problem appear with the latest clang? FWIW this new runtime
is better in a number of other aspects.

Francisco Moraes

Jan 7, 2022, 11:56:30 AM
to thread-sanitizer
Hi Dmitry,

Thanks for the reply. I forgot to mention this was experienced while using GCC 9.3.1. Unfortunately our code doesn't compile easily with Clang, so I cannot just try it. I can try a newer GCC and see if that improves the result.

I am not 100% sure of the scenario that causes it, but the following code does generate a message about the destroyed mutex (it is not a 100% match to the first case our code hits, but it was the only way I could force the message):

```
#include <cstdlib>   // malloc
#include <thread>

#include <sanitizer/tsan_interface.h>

using namespace std;

bool flag;

int memory = 0;

void f()
{
    flag = true;  // intentionally racy write
}

int main(int argc, char **argv)
{
    // Annotate a "mutex" at the address of a global and lock it...
    void *_lk = &memory;
    __tsan_mutex_create(_lk, __tsan_mutex_not_static);

    __tsan_mutex_pre_lock(_lk, __tsan_mutex_try_lock | __tsan_mutex_not_static);
    __tsan_mutex_post_lock(_lk, __tsan_mutex_try_lock | __tsan_mutex_not_static, 0);

//    __tsan_mutex_pre_unlock(_lk, __tsan_mutex_not_static);
//    __tsan_mutex_post_unlock(_lk, __tsan_mutex_not_static);

    // ...then destroy it while it is still locked.
    __tsan_mutex_destroy(_lk, __tsan_mutex_not_static);

    void *lock = malloc(32);
    __tsan_mutex_create(lock, __tsan_mutex_not_static);

    // Re-create and lock a new mutex at the same reused address.
    __tsan_mutex_create(&memory, __tsan_mutex_not_static);
    __tsan_mutex_pre_lock(&memory, __tsan_mutex_not_static);
    __tsan_mutex_post_lock(&memory, __tsan_mutex_not_static, 0);

    flag = false;

    thread t1(f);

    while(!flag);  // intentionally racy read

    t1.join();
}
```
Tested with both GCC 9.3.1 (g++ (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)) and Clang 13.0.0 (Apple clang version 13.0.0 (clang-1300.0.29.30))

Compiled with

 clang++ -fsanitize=thread -fno-common -o tsan2 tsan2.cpp

Output:

```
tsan2(44758,0x10a8fb600) malloc: nano zone abandoned due to inability to preallocate reserved vm space.
==================
WARNING: ThreadSanitizer: destroy of a locked mutex (pid=44758)
    #0 __tsan_mutex_destroy <null>:3 (libclang_rt.tsan_osx_dynamic.dylib:x86_64h+0x48c9b)
    #1 main <null>:2 (tsan2:x86_64+0x1000026f0)

  and:
    #0 __tsan_mutex_post_lock <null>:3 (libclang_rt.tsan_osx_dynamic.dylib:x86_64h+0x48dc2)
    #1 main <null>:2 (tsan2:x86_64+0x1000026dd)

  Location is global 'memory' at 0x0001056e80dc (tsan2+0x0001000080dc)

  Mutex M20 (0x0001056e80dc) created at:
    #0 __tsan_mutex_create <null>:3 (libclang_rt.tsan_osx_dynamic.dylib:x86_64h+0x48c1b)
    #1 main <null>:2 (tsan2:x86_64+0x1000026b5)

SUMMARY: ThreadSanitizer: destroy of a locked mutex (tsan2:x86_64+0x1000026f0) in main+0x80
==================

==================
WARNING: ThreadSanitizer: data race (pid=44758)
  Write of size 1 at 0x0001056e80d8 by thread T1:
    #0 f() <null>:2 (tsan2:x86_64+0x100002658)
    #1 void* std::__1::__thread_proxy_cxx03<std::__1::__thread_invoke_pair<void (*)()> >(void*) <null>:2 (tsan2:x86_64+0x100002b9f)

  Previous read of size 1 at 0x0001056e80d8 by main thread (mutexes: write M5629503920308444, write M0):
    #0 main <null>:2 (tsan2:x86_64+0x1000027a3)

  Location is global 'flag' at 0x0001056e80d8 (tsan2+0x0001000080d8)

  Mutex M5629503920308444 is already destroyed.

  Mutex M0 (0x0001056e80dc) created at:
    #0 __tsan_mutex_create <null>:3 (libclang_rt.tsan_osx_dynamic.dylib:x86_64h+0x48c1b)
    #1 main <null>:2 (tsan2:x86_64+0x100002737)

  Thread T1 (tid=1130742, running) created by main thread at:
    #0 pthread_create <null>:3 (libclang_rt.tsan_osx_dynamic.dylib:x86_64h+0x2ca2d)
    #1 std::__1::__libcpp_thread_create(_opaque_pthread_t**, void* (*)(void*), void*) <null>:2 (tsan2:x86_64+0x100002af5)
    #2 std::__1::thread::thread<void (*)()>(void (*)()) <null>:2 (tsan2:x86_64+0x1000028f8)
    #3 std::__1::thread::thread<void (*)()>(void (*)()) <null>:2 (tsan2:x86_64+0x100002835)
    #4 main <null>:2 (tsan2:x86_64+0x10000278d)

SUMMARY: ThreadSanitizer: data race (tsan2:x86_64+0x100002658) in f()+0x18
==================
```

Dmitry Vyukov

Jan 7, 2022, 12:23:12 PM
to Francisco Moraes, thread-sanitizer
On Fri, 7 Jan 2022 at 17:56, Francisco Moraes
<francisc...@gmail.com> wrote:
>
> Hi Dmitry,
>
> Thanks for the reply. I forgot to mention this was experience while using GCC 9.3.1. Unfortunately our code doesn't compile easily with Clang, so I cannot just try it. I can try to use a newer GCC and see if that improves the result.

The changes may already be incorporated into gcc, but at this point you would need to build gcc at HEAD; older versions are generally not supported.


> I am not 100% sure of the scenario that causes it, but the following code does generate a message about the destroyed mutex (it is not a 100% match to the first case our code hits, but it was the only way I could force the message):

This looks like it is working as intended. The thread indeed holds a
mutex that is already destroyed, so tsan couldn't provide more info
about it.
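For reference, the correct annotation order can be sketched like this: the unlock pair runs before `__tsan_mutex_destroy`, so the destroy-of-locked warning does not fire. The `SpinLock` class and guard macros below are illustrative, not from your code, and the annotations compile away without -fsanitize=thread.

```cpp
#include <atomic>
#include <thread>

// Annotations exist only when building with -fsanitize=thread.
#if defined(__has_feature)
#  if __has_feature(thread_sanitizer)
#    define TSAN_ENABLED 1
#  endif
#elif defined(__SANITIZE_THREAD__)
#  define TSAN_ENABLED 1
#endif

#ifdef TSAN_ENABLED
#  include <sanitizer/tsan_interface.h>
#endif

// Hypothetical annotated spinlock; the key point is the call order:
// create -> pre/post lock -> pre/post unlock -> destroy.
class SpinLock {
 public:
  SpinLock() {
#ifdef TSAN_ENABLED
    __tsan_mutex_create(this, __tsan_mutex_not_static);
#endif
  }
  ~SpinLock() {
    // By the time the destructor runs the lock must be released;
    // destroying a held lock is exactly what the warning flags.
#ifdef TSAN_ENABLED
    __tsan_mutex_destroy(this, __tsan_mutex_not_static);
#endif
  }
  void lock() {
#ifdef TSAN_ENABLED
    __tsan_mutex_pre_lock(this, 0);
#endif
    while (flag_.test_and_set(std::memory_order_acquire)) {}
#ifdef TSAN_ENABLED
    __tsan_mutex_post_lock(this, 0, 0);
#endif
  }
  void unlock() {
#ifdef TSAN_ENABLED
    __tsan_mutex_pre_unlock(this, 0);
#endif
    flag_.clear(std::memory_order_release);
#ifdef TSAN_ENABLED
    __tsan_mutex_post_unlock(this, 0);
#endif
  }
 private:
  std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
};

// Two threads increment a shared counter under the lock; the lock is
// fully released before it is destroyed at end of scope.
int run_counter(int iters_per_thread) {
  int counter = 0;
  {
    SpinLock lk;
    auto work = [&] {
      for (int i = 0; i < iters_per_thread; ++i) {
        lk.lock();
        ++counter;
        lk.unlock();
      }
    };
    std::thread a(work), b(work);
    a.join();
    b.join();
  }  // lock destroyed here, already unlocked
  return counter;
}
```

Building this with -fsanitize=thread should produce no reports; moving the destroy before the unlock reproduces the "destroy of a locked mutex" warning from your output.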



Francisco Moraes

Jan 7, 2022, 1:07:28 PM
to thread-sanitizer
Like I mentioned, this was not 100% like our use case. The interesting thing is that the message doesn't always show up, which indicates some concurrency issue. Our scenario doesn't have the mutex locked when it is destroyed, but there is too much else happening in the process to recreate it. Before I went on vacation I was able to track the mutex id by placing breakpoints after the id was assigned. I recall stepping through some code that decided the ids were different and then didn't properly register the new mutex in the map, leaving the old destroyed one in place.

I will attempt this with the latest GCC I can use without having to compile it myself, and report back next week.

Dmitry Vyukov

Jan 7, 2022, 1:20:05 PM
to Francisco Moraes, thread-sanitizer
On Fri, 7 Jan 2022 at 19:07, Francisco Moraes
<francisc...@gmail.com> wrote:
>
> Like I mentioned, this was not 100% like our use case.

Well, I analysed what you provided. I don't have anything else.

> The interesting thing is that the message doesn't always show up, indicating some concurrency issue. Our scenario doesn't have the mutex locked when it is being destroyed but there are too many other things happening on the process to recreate. I was able to track the mutex id before I went on vacation by placing breakpoints after it got the id assigned. I recalled stepping through some code that decided the id's were different and then didn't properly register the new mutex in the map, leaving the old destroyed one in place.
>
> I will attempt with the latest GCC I can use without having to compile it myself and report next week.

Unfortunately the changes do not seem to have reached gcc yet:
https://github.com/gcc-mirror/gcc/commits/master/libsanitizer/tsan

Francisco Moraes

Jan 10, 2022, 2:33:17 PM
to thread-sanitizer
Is it possible to compile my own TSAN with the new code? Just wondering if there is anything I can do to work around these messages.

Francisco

Dmitry Vyukov

Jan 11, 2022, 7:58:27 AM
to Francisco Moraes, thread-sanitizer
On Mon, 10 Jan 2022 at 20:33, Francisco Moraes
<francisc...@gmail.com> wrote:
>
> Is it possible to compile my own TSAN with the new code ? Just wondering if there is anything I can do to try to work around these messages.

I know how to do it for clang, but I don't know for gcc.
Once it is integrated into gcc git, you would just need to build a new gcc.
I don't know what the procedure for integrating it into gcc is.
There was this recent thread about it:
https://gcc.gnu.org/pipermail/gcc/2021-December/237957.html
gcc developers want to do it, so I would ping them first.