
initial size of a std::vector of std::strings


Lynn McGuire

Dec 7, 2022, 12:42:49 AM
Is there a way to set the size of std::vector <std::string> formulas to
max_ncp size ?

Something like "std::vector <std::string> formulas [max_ncp];"

Thanks,
Lynn

James Kuyper

Dec 7, 2022, 12:52:31 AM
23.2.6.1:
explicit vector(size_type n);
5 Effects: Constructs a vector with n default constructed elements.
6 Requires: T shall be DefaultConstructible.
7 Complexity: Linear in n.
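
For concreteness, here is a minimal sketch of what that constructor gives
for the original question (the value of max_ncp is illustrative only):

#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

int main() {
    const std::size_t max_ncp = 1000;   // illustrative value only

    // One vector holding max_ncp default-constructed (empty) strings.
    std::vector<std::string> formulas(max_ncp);

    assert(formulas.size() == max_ncp);
    assert(formulas.front().empty());
}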

Bonita Montero

Dec 7, 2022, 2:17:28 AM
If you want to fill the vector sequentially with your data, up to a
size you know in advance, declare an empty vector<string>, call
vec.reserve( max_ncp ), and then do the corresponding number of
emplace_back calls on that vector afterwards:

vector<string> vStr;
vStr.reserve( max_ncp );
for( size_t i = 0; i != max_ncp; ++i )
    vStr.emplace_back( ... );

Öö Tiib

Dec 7, 2022, 3:23:49 AM
When max_ncp is a compile-time constant then
std::array<std::string, max_ncp> can be what you want.

It is perhaps worth noting that I've seen reserve(n) of std::vector
overused in practice as a bad pessimization. Vector is typically
designed so that we get about log(max) allocations and about max
element copies when the vector just grows freely during its lifetime.
Someone micromanaging it can turn that into something far worse.

Juha Nieminen

Dec 7, 2022, 4:13:29 AM
Öö Tiib <oot...@hot.ee> wrote:
> Maybe worth to note that I've seen reserve(n) of std::vector overused
> in practice as bad pessimization. Vector is typically designed so that
> we get log(max) allocations and max times element copies because
> of vector just growing freely during its life-time. That can be turned into
> way worse by someone micromanaging it.

The typical std::vector implementation doubles its capacity every time it
needs to grow.

Now consider what happens if you are adding, say, at least a thousand
elements to a vector (using push_back() or emplace_back()): it will do
ten reallocations in quick succession, every time it has to double its
capacity. While the frequency of these reallocations decreases
exponentially, at the beginning they are very frequent, and there's
literally no advantage in doing them.

If you know that you will be adding at least a thousand or so
elements, doing an initial reserve(1024) removes *all* of those ten
useless initial reallocations.

Avoiding the useless reallocations is not just a question of speed,
but also of avoiding memory fragmentation. The fewer dynamic memory
allocations you do, the better (in terms of both).

Alf P. Steinbach

Dec 7, 2022, 6:29:19 AM
The most reasonable interpretation to me is that you're asking how to
set the initial /capacity/ of the vector in the declaration, to avoid an
explicit .reserve() invocation as a separate following statement.

Presumably because you declare such vectors in many places.

To do that you can wrap the vector, either via a factory function or
via a simple class, like

struct Capacity{ int value; };

struct Fast_stringvec:
    public vector<string>
{
    template< class... Args >
    Fast_stringvec( const Capacity capacity, Args&&... args ):
        vector<string>( forward<Args>( args )... )
    {
        reserve( capacity.value );
    }
};

Then do e.g.

auto formulas = Fast_stringvec( Capacity( max_ncp ) );

... where I believe the use of `auto` helps avoid the Most Vexing Parse. ;-)

If `max_ncp` is a constant you can add a constructor that uses that by
default.
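
A possible sketch of that (assuming max_ncp is a compile-time constant,
and reusing the Capacity-taking constructor above via delegation):

    // Sketch only: default-construct with capacity max_ncp reserved.
    Fast_stringvec():
        Fast_stringvec( Capacity{ max_ncp } )
    {}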

---

A not so reasonable interpretation is that you're asking how to make the
vector contain `max_ncp` strings initially, for which you can just do

vector<string> formulas( max_ncp, ""s );

---

Crossing the line to a perhaps unreasonable interpretation is that you're
asking how you can make it so that when you add a new string to the
vector, the new string item is guaranteed to be of size `max_ncp`.

For that you need to make the item type one that holds a `string` of
exactly that size.

It would probably be necessary to take control of copying to such items,
to avoid inadvertent size changes.
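
A rough sketch of such an item type (padding with spaces is just one
arbitrary policy; the name and details are illustrative only):

#include <cstddef>
#include <string>

class Fixed_size_string
{
    std::string m_value;
public:
    explicit Fixed_size_string( const std::size_t size ):
        m_value( size, ' ' )
    {}

    Fixed_size_string& operator=( const std::string& s )
    {
        const std::size_t n = m_value.size();
        m_value.assign( s, 0, n );      // keep at most n characters
        m_value.resize( n, ' ' );       // restore the fixed length
        return *this;
    }

    const std::string& value() const { return m_value; }
};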


- Alf

Öö Tiib

Dec 7, 2022, 9:04:03 AM
On Wednesday, 7 December 2022 at 11:13:29 UTC+2, Juha Nieminen wrote:
> Öö Tiib <oot...@hot.ee> wrote:
> > Maybe worth to note that I've seen reserve(n) of std::vector overused
> > in practice as bad pessimization. Vector is typically designed so that
> > we get log(max) allocations and max times element copies because
> > of vector just growing freely during its life-time. That can be turned into
> > way worse by someone micromanaging it.
>
> The typical std::vector implementation doubles its capacity every time it
> needs to grow.
>
> Now consider what happens if you are adding, say, at least a thousand
> elements to a vector (using push_back() or emplace_back()): It will do
> ten reallocations pretty quickly in succession, every time it has to
> double its capacity. While the frequency at which it's doing these
> reallocations is decrementing exponentially, at the beginning they
> are very frequent, and there's literally no advantage in doing them.
>
> If you know that you will be adding at least a thousand or so
> elements, doing an initial reserve(1024) will remove *all* of those
> useless initial ten reallocations.

That is all trivially true. One reserve() call is not "micromanaging", so
you missed my point. What I described happens when reserve is called
more than once, for example in 1024-element steps for a vector that
grows to a million elements. The result is something like 50 times
more allocations and element copies: it replaces the first 10
allocations with one, but the next 10 with roughly 1000.
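
A concrete sketch of that anti-pattern (the numbers are illustrative):

#include <cstddef>
#include <vector>

int main()
{
    std::vector<int> v;
    for ( std::size_t i = 0; i != 1000000; ++i )
    {
        // Linear 1024-element steps: with typical implementations this
        // means roughly a thousand reallocations, each copying everything
        // accumulated so far, instead of the couple of dozen that the
        // default exponential growth would do.
        if ( v.size() == v.capacity() )
            v.reserve( v.capacity() + 1024 );
        v.push_back( int( i ) );
    }
}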

> Avoiding the useless reallocations is not just a question of speed,
> but also of avoiding memory fragmentation. The less dynamic memory
> allocations you do, the better (in terms of both).

Yes, misused reserve calls can also fragment memory far worse, since the
fewer dynamic memory allocations you do, the better.

Paavo Helde

Dec 7, 2022, 9:34:17 AM
On 07.12.2022 11:13, Juha Nieminen wrote:
> Öö Tiib <oot...@hot.ee> wrote:
>> Maybe worth to note that I've seen reserve(n) of std::vector overused
>> in practice as bad pessimization. Vector is typically designed so that
>> we get log(max) allocations and max times element copies because
>> of vector just growing freely during its life-time. That can be turned into
>> way worse by someone micromanaging it.
>
> The typical std::vector implementation doubles its capacity every time it
> needs to grow.

Not any more. Nowadays a factor like 1.5 is more popular, meaning
something like 18 allocations for 1000 push_back()-s. This is from MSVC++19:

std::vector<int, my_allocator<int>> v;
for (int i = 0; i < 1000; ++i) {
    v.push_back(i);
}


Allocating 1 elements
Allocating 1 elements
Allocating 2 elements
Allocating 3 elements
Allocating 4 elements
Allocating 6 elements
Allocating 9 elements
Allocating 13 elements
Allocating 19 elements
Allocating 28 elements
Allocating 42 elements
Allocating 63 elements
Allocating 94 elements
Allocating 141 elements
Allocating 211 elements
Allocating 316 elements
Allocating 474 elements
Allocating 711 elements
Allocating 1066 elements
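
The my_allocator above isn't shown here; a minimal tracing allocator
along these lines would produce that kind of output (a sketch only, not
the exact allocator used):

#include <cstddef>
#include <cstdio>
#include <memory>
#include <vector>

// Minimal allocator that reports every allocation the vector makes.
template< class T >
struct my_allocator
{
    using value_type = T;

    my_allocator() = default;
    template< class U > my_allocator( const my_allocator<U>& ) {}

    T* allocate( std::size_t n )
    {
        std::printf( "Allocating %zu elements\n", n );
        return std::allocator<T>().allocate( n );
    }
    void deallocate( T* p, std::size_t n )
    {
        std::allocator<T>().deallocate( p, n );
    }
};

template< class T, class U >
bool operator==( const my_allocator<T>&, const my_allocator<U>& ) { return true; }
template< class T, class U >
bool operator!=( const my_allocator<T>&, const my_allocator<U>& ) { return false; }

int main()
{
    std::vector<int, my_allocator<int>> v;
    for ( int i = 0; i < 1000; ++i ) {
        v.push_back( i );
    }
}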

Bonita Montero

Dec 7, 2022, 9:37:25 AM
On 07.12.2022 at 15:03, Öö Tiib wrote:

> That is is all trivially true. One reserve() call is not "micromanaging"
> so you missed my point. What I described can be achieved by calling
> reserve more than once for example in 1024 element steps for vector
> that grows to million elements. Result is that you get like 50 times
> more allocations and element copies. It does replace first 10
> allocations with one but next 10 with 1000.


For each append where the capacity grows the vector's capacity is
doubled with libstdc++ and libc++. With MSVC the capacity grows in
50%-increments. So the number of resizes actually isn't so huge.
Java does it the same way and stores the data of an ArrayList also
in a normal array that grows by 100% if the capacity is exhausted.

> Yes, the misused reserve calls can also fragment the memory ...

With modern allocators like mimalloc, jemalloc and tcmalloc, external
fragmentation isn't actually an issue: there's no external fragmentation
up to a size of two pages (but an average internal fragmentation of
25%), and beyond that the memory blocks grow in page increments and
each page can easily be deallocated, so external fragmentation doesn't
hurt much.



Bonita Montero

Dec 7, 2022, 9:38:31 AM
On 07.12.2022 at 15:34, Paavo Helde wrote:

> Not any more. Nowadays a factor like 1.5 is more popular, ...

Only MSVC uses 50% increments to satisfy the amortized constant
overhead constraint while inserting. libstdc++ and libc++ use a
100% increment.


Öö Tiib

Dec 7, 2022, 12:33:53 PM
On Wednesday, 7 December 2022 at 16:37:25 UTC+2, Bonita Montero wrote:
> Am 07.12.2022 um 15:03 schrieb Öö Tiib:
>
> > That is is all trivially true. One reserve() call is not "micromanaging"
> > so you missed my point. What I described can be achieved by calling
> > reserve more than once for example in 1024 element steps for vector
> > that grows to million elements. Result is that you get like 50 times
> > more allocations and element copies. It does replace first 10
> > allocations with one but next 10 with 1000.
> For each append where the capacity grows the vector's capacity is
> doubled with libstdc++ and libc++. With MSVC the capacity grows in
> 50%-increments. So the number of resizes actually isn't so huge.

Yes, that was my point: the base-2 logarithm of a million is about 20
and the base-1.5 logarithm of a million is about 34. Neither is a
large number, so the default behaviour of the mainstream standard
library vector implementations is reasonable.

In my experience application programmers are quite capable of being
unreasonable, micromanaging, and reserving linearly. So when suggesting
a manual reserve() it is always worth warning that it is a double-edged
sword, not some kind of magic tool.

One example of common naive usage of reserve() is, instead of a plain
insert() of a range (which grows exponentially if needed), checking how
much more storage is needed, reserving it, and then adding the
elements. Another tool that is used pessimally in about 9 out of 10
cases is shrink_to_fit().
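
For example, contrast (a sketch):

#include <string>
#include <vector>

// Naive: exact-fit reserve before every batch append. Done repeatedly,
// each batch reallocates and copies everything again, defeating the
// exponential growth.
void append_naive( std::vector<std::string>& dst, const std::vector<std::string>& src )
{
    dst.reserve( dst.size() + src.size() );
    for ( const auto& s : src )
        dst.push_back( s );
}

// Plain range insert: the vector still grows exponentially when needed.
void append_plain( std::vector<std::string>& dst, const std::vector<std::string>& src )
{
    dst.insert( dst.end(), src.begin(), src.end() );
}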

Bonita Montero

Dec 7, 2022, 12:45:32 PM
On 07.12.2022 at 18:33, Öö Tiib wrote:

> Application programmer is in my experience rather capable of
> being unreasonable and micromanaging and reserving linearly.
> So when suggesting manual reserve() then it is always worth to
> warn that it is double edged sword not some kind of magic tool.

reserve isn't hard to use, and it is used when you know the number of
items to be inserted. So I don't believe that misuse of reserve()
is common.

> One example of common naive usage of reserve() is instead of
> plain insert() of range (that grows exponentially if needed) is
> checking how lot more storage is needed, reserving for it and
> then adding the elements. ...

That may make sense if you have a really large vector that isn't backed
by a memory pool but is allocated directly from the kernel and given
back to the kernel when freed, because of the size of the allocation.
In that case the allocated pages aren't actually committed until you
touch them, and on some systems they're not even subtracted from swap
on allocation (overcommit).

Lynn McGuire

Dec 7, 2022, 3:54:23 PM
I am trying to declare a vector with max_ncp strings in a struct.

struct component_data {
    std::vector <std::string> component_names [max_ncp];
    doublereal heatoffusionatmeltingpoint [max_ncp];
    doublereal meltingpointtemperature [max_ncp];
    doublereal sublimationtemperature [max_ncp];
    doublereal triplepointtemperature [max_ncp];
    doublereal triplepointpressure [max_ncp];
};

Visual Studio 2015 does not like
std::vector <std::string> component_names (max_ncp);
or
std::vector <std::string> component_names (max_ncp, "");

But it does like
std::vector <std::string> component_names [max_ncp];
But it does not work.

Thanks,
Lynn

daniel...@gmail.com

Dec 7, 2022, 4:16:37 PM
On Wednesday, December 7, 2022 at 3:54:23 PM UTC-5, Lynn McGuire wrote:

> I am trying to declare a vector with max_ncp strings in a struct.
> struct component_data {
> std::vector <std::string> component_names [max_ncp];
> doublereal heatoffusionatmeltingpoint [max_ncp];
> doublereal meltingpointtemperature [max_ncp];
> doublereal sublimationtemperature [max_ncp];
> doublereal triplepointtemperature [max_ncp];
> doublereal triplepointpressure [max_ncp];
> };
>
> Visual Studio 2015 does not like
> std::vector <std::string> component_names (max_ncp);
> or
> std::vector <std::string> component_names (max_ncp, "");
>
> But it does like
> std::vector <std::string> component_names [max_ncp];
> But it does not work.
>

Is this what you want?

struct component_data {
    const std::size_t max_ncp = 10;

    std::vector <std::string> component_names;

    component_data()
        : component_names(max_ncp)
    {}
};

VS 2015 is fine with that.

Daniel

Alf P. Steinbach

Dec 7, 2022, 4:33:57 PM
On 7 Dec 2022 21:54, Lynn McGuire wrote:
>
> I am trying to declare a vector with max_ncp strings in a struct.
> struct component_data {
>     std::vector <std::string> component_names [max_ncp];
>     doublereal heatoffusionatmeltingpoint [max_ncp];
>     doublereal meltingpointtemperature [max_ncp];
>     doublereal sublimationtemperature [max_ncp];
>     doublereal triplepointtemperature [max_ncp];
>     doublereal triplepointpressure [max_ncp];
> };
>
> Visual Studio 2015 does not like
>        std::vector <std::string> component_names (max_ncp);
> or
>     std::vector <std::string> component_names (max_ncp, "");
>
> But it does like
>     std::vector <std::string> component_names [max_ncp];
> But it does not work.

It seems to me that what you're trying to do is simply

struct Components_data
{
    int n_components;
    std::string name [max_ncp];
    double heat_of_fusion_at_melting_point [max_ncp];
    double melting_point_temperature [max_ncp];
    double sublimation_temperature [max_ncp];
    double triple_point_temperature [max_ncp];
    double triple_point_pressure [max_ncp];
};

But with `std::string` involved this does not have the memory layout of
a Fortran structure.

So, do consider whether this can serve your requirements instead:

struct Component
{
    std::string name;
    double heat_of_fusion_at_melting_point;
    double melting_point_temperature;
    double sublimation_temperature;
    double triple_point_temperature;
    double triple_point_pressure;
};

using Components_data = std::vector<Component>;

Depending on the processing you do this might be slower or faster; it
needs measuring if that's important.

It /is/ however IMO very likely much more convenient and easy to use
correctly.
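
For example, building it might look like this (a sketch; read_component
and n_components stand in for whatever input the real code uses):

    Components_data components;
    components.reserve( max_ncp );      // optional upper-bound hint
    for ( int i = 0; i < n_components; ++i )
        components.push_back( read_component( i ) );   // hypothetical reader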

- Alf

Michael S

Dec 7, 2022, 4:34:30 PM
Works fine on godbolt's vs2015

https://godbolt.org/z/8WPo33Yn1

Bonita Montero

Dec 7, 2022, 4:37:42 PM
Sorry, this lady is writing code with several hundred thousand lines
of code and has issues with such things ... ???

Keith Thompson

Dec 7, 2022, 4:49:58 PM
Lynn McGuire <lynnmc...@gmail.com> writes:
[...]
> I am trying to declare a vector with max_ncp strings in a struct.
> struct component_data {
> std::vector <std::string> component_names [max_ncp];
> doublereal heatoffusionatmeltingpoint [max_ncp];
> doublereal meltingpointtemperature [max_ncp];
> doublereal sublimationtemperature [max_ncp];
> doublereal triplepointtemperature [max_ncp];
> doublereal triplepointpressure [max_ncp];
> };
>
> Visual Studio 2015 does not like
> std::vector <std::string> component_names (max_ncp);
> or
> std::vector <std::string> component_names (max_ncp, "");

How does it express its dislike? Both work for me:

#include <vector>
#include <string>
#include <iostream>

int main() {
    int max_ncp = 42;
    {
        std::vector<std::string> component_names(max_ncp);
        std::cout << component_names.size() << ' ';
    }
    {
        std::vector<std::string> component_names(max_ncp, "");
        std::cout << component_names.size() << '\n';
    }
}

The output is "42 42".

> But it does like
> std::vector <std::string> component_names [max_ncp];
> But it does not work.

That defines an array (not a std::array, just a C-style array object) of
max_ncp vectors. I think you want a single vector.

Do you want to set the initial size or the initial capacity?

--
Keith Thompson (The_Other_Keith) Keith.S.T...@gmail.com
Working, but not speaking, for XCOM Labs
void Void(void) { Void(); } /* The recursive call of the void */

Lynn McGuire

Dec 7, 2022, 5:01:50 PM
One vector with max_ncp strings in it.

Thanks,
Lynn

Lynn McGuire

Dec 7, 2022, 5:28:28 PM
Now I am wondering if I should be using std::array instead of std::vector.

Thanks,
Lynn

Bo Persson

Dec 7, 2022, 5:46:12 PM
You are not allowed to use round parentheses in default member
initializers, because of possible confusion with function declarations
("most vexing parse").

Like

    std::vector <std::string> component_names ();

declares a member function returning a vector, not a default-initialized
variable.

> or
>     std::vector <std::string> component_names (max_ncp, "");
>
> But it does like
>     std::vector <std::string> component_names [max_ncp];
> But it does not work.

Define "work". :-)

This creates an array of vectors of strings. Probably not what you want,
anyway.


You CAN use a {} initializer for the vector

    std::vector <std::string> component_names {max_ncp, ""};

but you have to be REALLY careful not to trigger an initializer_list
constructor when the things inside the {} are convertible to the vector's
value type. Because then it will do something completely different!

Like

    vector<int> v {max_ncp, 0};

will create a vector with two elements, max_ncp and 0. Sigh!
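
A small sketch of the difference:

#include <string>
#include <vector>

int main()
{
    const int max_ncp = 1000;

    // Two elements, {1000, 0}: the initializer_list<int> constructor wins.
    std::vector<int> a { max_ncp, 0 };

    // 1000 zero-valued elements: parentheses pick the (count, value) constructor.
    std::vector<int> b ( max_ncp, 0 );

    // 1000 empty strings: int is not convertible to std::string, so here the
    // braces also end up picking the (count, value) constructor.
    std::vector<std::string> c { max_ncp, "" };
}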

>
> Thanks,
> Lynn

Keith Thompson

Dec 7, 2022, 5:47:27 PM
So the initial state of the vector should be that it has a size() of
max_ncp, and each of the max_ncp strings it contains is initialized to
-- what, the empty string?

Then either this:

    std::vector<std::string> component_names(max_ncp);

or this:

    std::vector<std::string> component_names(max_ncp, "");

should give you what you want. You said Visual Studio 2015 doesn't like
either of those, but you didn't say what its complaint was.

Is the size of the vector going to change after it's been created?

Bo Persson

Dec 7, 2022, 5:48:52 PM
That's for a local variable inside a function, where

std::vector <std::string> bar(x);

works.

However, as a struct member, parentheses are not allowed. Because reasons.


Keith Thompson

Dec 7, 2022, 5:53:57 PM
Only if the size of the vector will never change after it's created (I
don't think you mentioned that). And in that case, you could use either
a std::array of std::string:

    std::array<std::string, max_ncp> formulas;

or a plain array of std::string:

    std::string formulas[max_ncp];

Both require max_ncp to be a constant expression.
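
Applied to the struct from earlier in the thread, that could look like
this sketch (doublereal is assumed to be an f2c-style typedef for
double, and the value of max_ncp is illustrative):

#include <array>
#include <string>

using doublereal = double;   // assumption for this sketch
const int max_ncp = 1000;    // illustrative constant expression

struct component_data {
    std::array<std::string, max_ncp> component_names;   // max_ncp empty strings
    doublereal heatoffusionatmeltingpoint [max_ncp];
    doublereal meltingpointtemperature [max_ncp];
    doublereal sublimationtemperature [max_ncp];
    doublereal triplepointtemperature [max_ncp];
    doublereal triplepointpressure [max_ncp];
};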

Lynn McGuire

Dec 7, 2022, 5:56:31 PM
On 12/7/2022 4:53 PM, Keith Thompson wrote:
> Lynn McGuire <lynnmc...@gmail.com> writes:
>> On 12/7/2022 2:23 AM, Öö Tiib wrote:
>>> On Wednesday, 7 December 2022 at 07:42:49 UTC+2, Lynn McGuire wrote:
>>>> Is there a way to set the size of std::vector <std::string> formulas to
>>>> max_ncp size ?
>>>>
>>>> Something like "std::vector <std::string> formulas [max_ncp];"
>>> When that max_ncp is compile-time known immutable value then
>>> std::array<std::string, max_ncp> can be what you want.
>>> Maybe worth to note that I've seen reserve(n) of std::vector
>>> overused
>>> in practice as bad pessimization. Vector is typically designed so that
>>> we get log(max) allocations and max times element copies because
>>> of vector just growing freely during its life-time. That can be turned into
>>> way worse by someone micromanaging it.
>>
>> Now I am wondering if I should be using std::array instead of std::vector.
>
> Only if the size of the vector will never change after it's created (I
> don't think you mentioned that). And in that case, you could use either
> a std::array of std::string:
>
> std::array<std::string, max_ncp> formulas;
>
> or a plain array of std::string:
>
> std::string formulas[max_ncp];
>
> Both require max_ncp to be a constant expression.

Yup, either would work well for me. max_ncp is a const int that only
changes at a far future recompile time.
const int max_ncp = 1000;

Thanks,
Lynn


Lynn McGuire

Dec 7, 2022, 6:08:43 PM
Thanks,
Lynn

Lynn McGuire

Dec 7, 2022, 6:09:18 PM
The complaint was strange. Like it was getting confused.

Thanks,
Lynn

Lynn McGuire

Dec 8, 2022, 2:44:40 AM
On 12/6/2022 11:42 PM, Lynn McGuire wrote:
> Is there a way to set the size of std::vector <std::string> formulas to
> max_ncp size ?
>
> Something like "std::vector <std::string> formulas [max_ncp];"
>
> Thanks,
> Lynn

Thanks all !

Lynn

Juha Nieminen

Dec 8, 2022, 2:58:36 AM
Paavo Helde <ees...@osa.pri.ee> wrote:
> 07.12.2022 11:13 Juha Nieminen kirjutas:
>> Öö Tiib <oot...@hot.ee> wrote:
>>> Maybe worth to note that I've seen reserve(n) of std::vector overused
>>> in practice as bad pessimization. Vector is typically designed so that
>>> we get log(max) allocations and max times element copies because
>>> of vector just growing freely during its life-time. That can be turned into
>>> way worse by someone micromanaging it.
>>
>> The typical std::vector implementation doubles its capacity every time it
>> needs to grow.
>
> Not any more. Nowadays a factor like 1.5 is more popular, meaning
> something like 18 allocations for 1000 push_back()-s.

All the more reason to do the initial reserve if you know the number of
elements that will be added.

> This is from MSVC++19:

I believe both libstdc++ (used by default by gcc) and libc++ (used by
default by clang) double the size. I doubt they are going to change
any time soon.

Obviously there are advantages and disadvantages to each growth factor.

Juha Nieminen

Dec 8, 2022, 3:03:21 AM
Keith Thompson <Keith.S.T...@gmail.com> wrote:
>> Now I am wondering if I should be using std::array instead of std::vector.
>
> Only if the size of the vector will never change after it's created

Also, you need to know the size at compile time, which isn't a trivial
restriction.

If the size is determined at compile time, std::array is *almost* always
a more efficient solution than std::vector.

*Almost* always. There are situations where std::vector can actually be
the more efficient solution. One such situation is if you need to move
the vector around a lot (i.e. with its move constructor/assignment
operator).

Also, extremely large arrays (megabytes or more) might be better allocated
with std::vector, as a C-style array / std::array may start exhibiting
problems when it becomes too large. (Heaven forbid you create one on
the stack!)
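
A sketch of the move-cost difference:

#include <array>
#include <string>
#include <utility>
#include <vector>

int main()
{
    std::vector<std::string> v( 1000 );
    std::array<std::string, 1000> a {};

    // Moving a vector just transfers its internal pointers: O(1),
    // no strings are touched.
    std::vector<std::string> v2 = std::move( v );

    // "Moving" a std::array must move each of its 1000 strings: O(n).
    std::array<std::string, 1000> a2 = std::move( a );
}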

olcott

Jun 14, 2023, 6:47:15 PM
So you do have technical competence.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Chris M. Thomasson

Jun 14, 2023, 7:00:42 PM
On 6/14/2023 3:47 PM, olcott wrote:
> On 12/7/2022 8:38 AM, Bonita Montero wrote:
>> Am 07.12.2022 um 15:34 schrieb Paavo Helde:
>>
>>> Not any more. Nowadays a factor like 1.5 is more popular, ...
>>
>> Only MSVC uses 50% increments to satisfy the amortized constant
>> overhead constraint while inserting. libstdc++ and libc++ use a
>> 100% increment.
>>
>>
>
> So you do have technical competence.
>

Bonita Montero is not stupid.

olcott

Jun 14, 2023, 8:19:30 PM
That would mean that she denies that this is correct knowing full well
that it is correct:

Termination analyzer H does correctly thwart what would be an otherwise
successful denial of service attack by pathological input D.

Bonita Montero

Jun 14, 2023, 10:39:35 PM
On 15.06.2023 at 00:47, olcott wrote:
> On 12/7/2022 8:38 AM, Bonita Montero wrote:
>> Am 07.12.2022 um 15:34 schrieb Paavo Helde:
>>
>>> Not any more. Nowadays a factor like 1.5 is more popular, ...
>>
>> Only MSVC uses 50% increments to satisfy the amortized constant
>> overhead constraint while inserting. libstdc++ and libc++ use a
>> 100% increment.
>>
>>
>
> So you do have technical competence.

... but I won't repeat that a thousand times.


olcott

Jun 14, 2023, 11:32:18 PM
It was only four days ago that I began talking about:

*Termination Analyzer H prevents Denial of Service attacks*
https://www.researchgate.net/publication/369971402_Termination_Analyzer_H_prevents_Denial_of_Service_attacks

It is an easily verified fact that termination analyzer H does correctly
thwart what would otherwise be a successful denial of service attack
when presented with input D having the halting problem's pathological
relationship to H.

This proves that the halting problem's pathological input is not an
issue for actual software systems.

olcott

Jun 15, 2023, 12:55:25 PM
A termination analyzer is an ordinary computer program that is supposed
to determine whether or not its input program will ever stop running or
gets stuck in infinite execution.

When a program input has been specifically defined to confuse a
termination analyzer it is correct to determine that the program
behavior is malevolent.

Prior to my work nothing could be done about inputs having a
pathological relationship to their termination analyzer. Prior to my
work Rice's theorem prevented this pathological relationship from being
recognized.

The pathological relationship is when an input program D is defined to
do the opposite of whatever its termination analyzer H says it will do.
If H says that D will stop running D runs an infinite loop. If H says
that D will never stop running, D immediately stops running.

When H(D,D) returns 0 this means that the input does not halt or the
input has pathological behavior that would otherwise cause the
termination analyzer to not halt. This means that the program has either
a non-termination bug or the program has malevolent behavior.

This reasoning completely overcomes the one key objection to my work
that has persisted for two years.

Tony Oliver

Jun 15, 2023, 8:16:57 PM
On Wednesday, 14 June 2023 at 23:47:15 UTC+1, olcott wrote:
> On 12/7/2022 8:38 AM, Bonita Montero wrote:
> > Am 07.12.2022 um 15:34 schrieb Paavo Helde:
> >
> >> Not any more. Nowadays a factor like 1.5 is more popular, ...
> >
> > Only MSVC uses 50% increments to satisfy the amortized constant
> > overhead constraint while inserting. libstdc++ and libc++ use a
> > 100% increment.
> >
> >
> So you do have technical competence.

Everyone who participates in this newsgroup has technical competence.
You, however, cannot even recognise it.

Keith Thompson

Jun 15, 2023, 8:25:40 PM
Tony Oliver <guinne...@gmail.com> writes:
> On Wednesday, 14 June 2023 at 23:47:15 UTC+1, olcott wrote:
[SNIP]
> Everyone who participates in this newsgroup has technical competence,
> You, however, cannot even recognise it.

Tony, please stop arguing with olcott outside of comp.theory.

My newsreader is already configured to hide anything olcott posts in
comp.lang.{c,c++}. If you continue, I'll configure it to hide your
posts as well.

--
Keith Thompson (The_Other_Keith) Keith.S.T...@gmail.com
Will write code for food.