
On converting "new" to "new(std::nothrow)"


Peter Weilbacher

May 23, 2007, 5:12:00 AM
This post is following up on discussions in
https://bugzilla.mozilla.org/show_bug.cgi?id=353144
where it was suggested to post in the newsgroups to attract more
attention to the problem and discuss possible solutions more in
the open.

(Most) code in the Mozilla project is written under the assumption
that the C++ operators new and new[] return NULL on failure (when
memory allocation fails). This is actually not the case, at least
not for GCC-compiled code: on failure a std::bad_alloc exception is
thrown. Since the code never catches this exception, a failed
allocation leads to a crash.
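
The difference between the two failure modes can be sketched in a few
standalone lines (a minimal illustration, not Mozilla code):

```cpp
#include <cstddef>
#include <new>

// Plain new: reports failure by throwing std::bad_alloc
// (std::bad_array_new_length for oversized arrays derives from it).
bool AllocThrows(std::size_t n) {
    try {
        char* p = new char[n];
        delete[] p;
        return false;                   // allocation succeeded, no throw
    } catch (const std::bad_alloc&) {
        return true;                    // failure reported via exception
    }
}

// new(std::nothrow): reports failure by returning a null pointer instead.
char* AllocNoThrow(std::size_t n) {
    return new (std::nothrow) char[n];  // nullptr on failure, no exception
}
```

Code that never catches the exception, or that is built so exceptions
cannot propagate, ends up terminating instead of seeing a NULL it could
check for.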

The discussions in bug 353144 contained several suggestions to
work around this:

- simply convert all new and new[] to new(std::nothrow) and
use a macro to warn when an undecorated new is used (works
with GCC only?) The latest patch for this is in
https://bugzilla.mozilla.org/attachment.cgi?id=250515

- use new macros (NS_NEW) to do the same. This was basically
rejected in the discussions. (comments 57 and 58)

- overload new and new[] using some C++ magic (comments 12 and
https://bugzilla.mozilla.org/attachment.cgi?id=239588 tried
to do this)

- Try to get GCC to implement an option to return NULL or make
it part of -fno-exceptions (comments 13, 25, 27 and following)
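
For reference, the "C++ magic" overload approach amounts to replacing
the global allocation functions so they report failure by returning
NULL. A minimal sketch of the idea (not the actual patch from
attachment 239588) looks roughly like:

```cpp
#include <cstdlib>
#include <new>

// Replace the global allocation functions with versions that return NULL
// instead of throwing. (Strictly speaking a replacement operator new must
// not return NULL, which is why this idea is tied to building with
// -fno-exceptions; compilers of the era would spell noexcept as throw().)
void* operator new(std::size_t size) noexcept { return std::malloc(size); }
void* operator new[](std::size_t size) noexcept { return std::malloc(size); }
void operator delete(void* p) noexcept { std::free(p); }
void operator delete[](void* p) noexcept { std::free(p); }
```

Because the allocation functions are declared non-throwing, the compiler
inserts a null check after each new-expression and skips construction on
failure, which is what makes the NULL-checking style in the tree work.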

Follow-up set to .platform.

Peter.

Benjamin Smedberg

May 23, 2007, 8:39:42 AM
Peter Weilbacher wrote:

> - simply convert all new and new[] to new(std::nothrow) and
> use a macro to warn when an undecorated new is used (works
> with GCC only?) The latest patch for this is in
> https://bugzilla.mozilla.org/attachment.cgi?id=250515

Without additional steps that ensure people don't reintroduce new(), this
sounds fragile from a maintenance perspective (in addition to being a lot of
source churn that should be avoided if possible).

> - overload new and new[] using some C++ magic (comments 12 and
> https://bugzilla.mozilla.org/attachment.cgi?id=239588 tried
> to do this)

Does this patch work? This is IMO the best solution.

--BDS

Jonas Sicking

May 23, 2007, 3:15:27 PM
Also note that you are doing this only for the 1.9 branch. Once we
release based on Mozilla 2 we will actually want |new| to throw.

So I'm not really sure that this is worth the effort.

/ Jonas


Benjamin Smedberg

May 23, 2007, 5:11:39 PM
Peter Weilbacher wrote:

> I guess it did solve the problem on Linux (where I never tested it). But
> on other platforms it caused either compilation problems (on Solaris,
> see comment 53 in the bug) or only solves half of the problem as new[]
> does not get overloaded on older GCCs (on OS/2 at least and probably
> also on BeOS).

If you mean GCC 2.9x, we have dropped support for those compilers (they
don't work at all), so they can be discounted. For Solaris, can't we
ifdef away the overloaded operators?

--BDS

Jonas Sicking

May 23, 2007, 5:33:13 PM
Peter Weilbacher wrote:

> On Wed, 23 May 2007 19:15:27 UTC, Jonas Sicking wrote:
>
>> Also note that you are doing this only for the 1.9 branch.
>
> I am actually not doing much, it was Mats who did the work. I am just
> interested to reduce the number of crashes that my OS/2 users complain
> about.

Honestly, I don't think you'll reduce the number of crashes much. As
hard as we try to deal with running out of memory, I strongly doubt
we're doing a very good job of it. Once you are out of memory you are in
pretty deep trouble no matter what.

Even if we are able to deal with allocations failing a few times,
eventually you are going to hit a place where we don't handle it well.

Just null-checking is often far from enough. You have to make sure that
the calling code deals well with the function failing, that nothing
breaks later because the object was never allocated, and so on.

Take strings, for example: they deal just fine with running out of
memory; the attempted modification will simply fail. But can all code
that uses strings really deal with any string modification failing?
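
The string situation can be illustrated with a hypothetical fallible
buffer (FallibleBuf is an invented stand-in, not a Mozilla class): the
modification itself fails cleanly, but every caller still has to check
the return value or the data silently goes missing.

```cpp
#include <cstddef>
#include <cstring>
#include <new>

class FallibleBuf {
    char* mData = nullptr;
    std::size_t mLen = 0;
public:
    ~FallibleBuf() { delete[] mData; }
    // Returns false, leaving the buffer unchanged, if allocation fails.
    // A caller that ignores the result loses the appended data silently.
    bool Append(const char* s) {
        std::size_t add = std::strlen(s);
        char* grown = new (std::nothrow) char[mLen + add + 1];
        if (!grown) return false;            // OOM: modification failed
        if (mLen) std::memcpy(grown, mData, mLen);
        std::memcpy(grown + mLen, s, add + 1);
        delete[] mData;
        mData = grown;
        mLen += add;
        return true;
    }
    const char* get() const { return mData ? mData : ""; }
};
```

The class itself is OOM-safe; the open question in the thread is whether
all the call sites actually check that boolean.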

>> Once we
>> release based on mozilla 2 we will actually want for |new| to throw.
>>
>> So I'm not really sure that this is worth the effort.
>

> Interesting new point. So how will crashes be avoided in Mozilla 2?
> Will all code get rewritten to catch exceptions?

Yes.

/ Jonas


Jonas Sicking

May 23, 2007, 7:12:33 PM
Peter Weilbacher wrote:

> On Wed, 23 May 2007 21:33:13 UTC, Jonas Sicking wrote:
>
>> Honestly, I don't think you'll reduce the number of crashes much. As
>> hard as we try to deal with running out of memory, i strongly doubt
>> we're doing a very good job at it. Once you are out of memory you are in
>> pretty deep trouble no matter what.
>>
>> Even if we are able to deal with allocations failing a few times,
>> eventually you are going to hit a place where we don't deal with it well.
>
> I agree, partly. In the builds that I create for OS/2 I changed some
> specific parts connected to allocating large images to use nothrow. If
> those images cannot be allocated that creates a visible effect.
> Attentive users are then alerted, can save their work and restart the
> browser or close tabs and windows safely before the actual crash occurs.
> Maybe it was just a lucky coincidence that I made that change in a piece
> of code where return values are checked far enough up the call chain. But
> perhaps it's not as hopeless as you think it is...

Yeah, your case is a one-off I think, and I'd be fine with addressing
that one specifically.

I usually say, whenever this discussion comes up, that while checking
everywhere is most likely not worth it, it is a good idea to check in
the few spots where we do make large allocations. Images are a good
example of this. The reason for this is twofold: A) since it's just a
few spots, we can make sure to deal with them properly, and B) since it's
a large allocation failing, it is likely that subsequent small ones could
still succeed, and we'd remain stable.
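
The large-allocation policy might look like this in practice
(AllocImageBuffer is a hypothetical example, not actual imagelib code):

```cpp
#include <cstddef>
#include <cstdint>
#include <new>

// Big, one-off allocations (image buffers) use new(std::nothrow) and
// report failure to the caller; small allocations elsewhere stay on
// plain new. The overflow guards keep a huge image from wrapping the
// size computation into a small, "successful" allocation.
uint8_t* AllocImageBuffer(std::size_t width, std::size_t height,
                          std::size_t bytesPerPixel) {
    if (height != 0 && width > SIZE_MAX / height)
        return nullptr;                            // width * height overflows
    std::size_t pixels = width * height;
    if (bytesPerPixel != 0 && pixels > SIZE_MAX / bytesPerPixel)
        return nullptr;                            // byte count overflows
    return new (std::nothrow) uint8_t[pixels * bytesPerPixel];
}
```

A caller that gets nullptr back can drop the image and keep the browser
running, which is exactly the "save your work and restart" behaviour
described above.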

/ Jonas
