
are new delete operators thread safe?


asetof...@gmail.com

Oct 28, 2015, 3:14:27 AM
are new delete operators thread safe?

Paavo Helde

Oct 28, 2015, 3:20:53 AM
asetof...@gmail.com wrote in news:48c66481-127a-4d3c-a0da-9de728486124@googlegroups.com:

> are new delete operators thread safe?
>

yes

Juha Nieminen

Oct 28, 2015, 4:06:25 AM
It's actually one of the reasons why they are so slow.


Paavo Helde

Oct 28, 2015, 1:18:38 PM
Juha Nieminen <nos...@thanks.invalid> wrote in
news:n0pvll$93o$2...@adenine.netfront.net:

> Paavo Helde <myfir...@osa.pri.ee> wrote:
>> asetof...@gmail.com wrote in
>> news:48c66481-127a-4d3c-a0da-9de728486124@googlegroups.com:
>>
>>> are new delete operators thread safe?
>>>
>>
>> yes
>
> It's actually one of the reasons why they are so slow.

Yes, better safe than sorry. I'm not actually sure what kind of thread
safety the OP meant. Maybe he was just concerned about whether one can
call new in two different threads at the same time. This is of course
guaranteed; otherwise we would need to put a mutex lock around each
string+=char operation.
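
For illustration, a minimal sketch of that basic guarantee (the worker
lambda and the iteration count are made up for the example): two threads
grow their own strings concurrently, which calls new/delete internally,
with no external locking needed.

#include <string>
#include <thread>

int main() {
    // Each worker grows its own string; the growth calls operator new
    // (and eventually operator delete) internally, concurrently with
    // the other thread.
    auto worker = [] {
        std::string s;
        for (int i = 0; i < 100000; ++i)
            s += 'x';
    };
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
}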

For allocators, the next level of thread safety means that memory
allocated in one thread can be released in another. Without this, any
inter-thread communication would become extremely fragile and cumbersome;
one could not legally call e.g. string.c_str() in the wrong thread. So
there are good reasons why the default allocator must provide such
thread safety.
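
For example, something along these lines relies on exactly that
guarantee (just an illustrative sketch, not tied to any particular
allocator): the string's buffer is allocated in the worker thread and
released later in the main thread.

#include <future>
#include <iostream>
#include <string>

int main() {
    // The worker thread allocates the string's heap buffer...
    std::future<std::string> f = std::async(std::launch::async, [] {
        return std::string(1000, 'x');
    });
    // ...the main thread takes ownership and eventually releases it
    // when msg goes out of scope.
    std::string msg = f.get();
    std::cout << msg.size() << '\n';
}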

However, there is no inherent reason why thread safety must mean
slowness. For example, there are general allocator libraries like Intel
TBB with drastically better performance than the default MSVC allocator,
at least in a heavily multithreaded regime, and at least as of some
years ago (I have not bothered to check recently).

Of course, dynamically allocating something will always be slower than
not allocating at all. Here C++ has an edge over many competitors, as it
can easily place complex objects on the stack, in the buffer of a
std::vector, etc.
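
A small made-up example of what is meant (Point3 is just a hypothetical
value type): the objects live either on the stack or by value inside one
vector buffer, so there is no per-object new at all.

#include <array>
#include <vector>

struct Point3 { double x, y, z; };  // hypothetical value type

double demo() {
    std::array<Point3, 16> on_stack{};    // no heap allocation at all
    std::vector<Point3> in_buffer(1000);  // one allocation for 1000 objects,
                                          // not 1000 separate calls to new
    double s = 0;
    for (const auto& p : on_stack)  s += p.z;
    for (const auto& p : in_buffer) s += p.z;
    return s;
}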

Cheers
Paavo

Marcel Mueller

Oct 28, 2015, 3:15:04 PM
On 28.10.15 09.06, Juha Nieminen wrote:
>>> are new delete operators thread safe?
>>
>> yes
>
> It's actually one of the reasons why they are so slow.

Well, it should be possible to have implementations that are mostly
lock-free by simply associating smaller memory pools with threads. If
new and delete are called from the same thread, which is not that
uncommon for short-lived objects, no lock is required. And even if the
object belongs to another thread's pool, it might be sufficient to place
a marker in a lock-free FIFO that is processed by the other thread at
its next new/delete call. The memory in the other thread's pool cannot
be reused until an allocation comes from that thread anyway. Of course,
this should not be done for large objects. And the pool should have a
lock-free allocation counter so that the entire pool can be disposed of
once it becomes empty.
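
A rough toy sketch of that scheme (ThreadPool, Block and the fixed block
size are all invented for the example, and a lock-free LIFO list is used
instead of a FIFO for simplicity): same-thread frees go onto a plain
free list, cross-thread frees are pushed onto an atomic list that the
owning thread drains on its next allocation.

#include <atomic>
#include <cstddef>
#include <new>

struct Block { Block* next; };

class ThreadPool {  // handles one fixed block size only
    static constexpr std::size_t kBlockSize = 64;
    Block* local_free_ = nullptr;               // touched only by the owner
    std::atomic<Block*> remote_free_{nullptr};  // pushed to by other threads

public:
    void* allocate() {                  // called by the owning thread
        if (!local_free_)
            drain_remote();             // reclaim cross-thread frees
        if (Block* b = local_free_) {
            local_free_ = b->next;
            return b;
        }
        return ::operator new(kBlockSize);  // fall back to the global heap;
                                            // such blocks simply join the
                                            // free list later (fine for a
                                            // sketch)
    }

    void deallocate_local(void* p) {    // free from the owning thread
        Block* b = static_cast<Block*>(p);
        b->next = local_free_;
        local_free_ = b;
    }

    void deallocate_remote(void* p) {   // free from any other thread:
        Block* b = static_cast<Block*>(p);  // lock-free push (CAS loop)
        Block* head = remote_free_.load(std::memory_order_relaxed);
        do {
            b->next = head;
        } while (!remote_free_.compare_exchange_weak(
                     head, b,
                     std::memory_order_release, std::memory_order_relaxed));
    }

private:
    void drain_remote() {
        // Grab the whole remote list with one atomic exchange, then
        // splice it into the local free list; no lock anywhere.
        Block* b = remote_free_.exchange(nullptr, std::memory_order_acquire);
        while (b) {
            Block* next = b->next;
            b->next = local_free_;
            local_free_ = b;
            b = next;
        }
    }
};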


Marcel

Scott Lurndal

Oct 28, 2015, 4:06:23 PM
At least on the Linux side, the thread-safe allocators are only slow
when there is contention. If one is allocating frequently enough that
the locks are frequently contended, then perhaps the application could
be refactored to avoid such frequent allocation.

Hard to do when using STL, however.
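
For what it's worth, one common shape of such a refactoring, STL
containers included, is to hoist the container out of the loop and reuse
its capacity, so the allocator is hit once rather than per iteration
(a generic sketch, nothing project-specific assumed):

#include <cstddef>
#include <string>
#include <vector>

// Counts total characters produced; the point is only the buffer reuse.
std::size_t run(const std::vector<std::string>& inputs) {
    std::size_t total = 0;
    std::string buffer;
    buffer.reserve(4096);           // allocate once up front
    for (const auto& in : inputs) {
        buffer.clear();             // keeps the capacity, so no new
        buffer += "prefix: ";       // allocation (barring unusually
        buffer += in;               // long inputs)
        total += buffer.size();
    }
    return total;
}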

Chris M. Thomasson

Oct 28, 2015, 4:57:47 PM
> "Scott Lurndal" wrote in message news:PU9Yx.41512$Iw3....@fx30.iad...

[...]

> At least on the Linux side, the thread-safe allocators are only slow
> when there is contention. If one is allocating frequently enough that
> the locks are frequently contended, then perhaps the application could
> be refactored to avoid such frequent allocation.

AFAICT, there is no "real need" for locks in the "hot spots" of allocators.