
Update only if not already in array


jonas.t...@gmail.com

Mar 1, 2017, 8:37:14 AM
I update a subarray dependent upon another array of marked nodes.

The update seems to work fine, but it should only push values to the array if they are not already in it. I can see I probably should use arr.indexOf to check whether a value is already within the array, but what will it return if the value is not in the array?

Because it is only then that I should push to the array?

function createMarkedLinks(){
    for (var t=0;t<marklist.length-1;t++){
        alert(marklist+" marklist length= "+marklist.length);
        var j=0;
        currentnode=marklist[t];
        alert("currentnode="+ marklist[t]);
        for (var s=1;s<marklist.length;s++){
            nextnode=marklist[s];
            inarray=arr.indexOf(nextnode);
            if (inarray==undefined){
                //What value will inarray have if nextnode not in array????
                alert("nextnode="+ marklist[s]);
                arr[currentnode].nodelinks.push(nextnode);
                arr[nextnode].nodelinks.push(currentnode);
            }
        }
    }
    autotext="";
    update_GraphData();
}


Julio Di Egidio

Mar 1, 2017, 8:53:19 AM
On Wednesday, March 1, 2017 at 2:37:14 PM UTC+1, jonas.t...@gmail.com wrote:

> inarray=arr.indexOf(nextnode);
> if (inarray==undefined){
> //What value will inarray have if nextnode not in array????

<https://www.google.com/search?q=mdn+javascript+array+indexof>

Julio

Evertjan.

Mar 1, 2017, 9:32:44 AM
jonas.t...@gmail.com wrote on 01 Mar 2017 in comp.lang.javascript:

> inarray=arr.indexOf(nextnode);
> if (inarray==undefined){

What nonsense 'undefined', Jonas, don't you ever read the specs?

Mozilla.org shows this:

"The first index of the element in the array; -1 if not found."

function updateVegetablesCollection (veggies, veggie) {
    if (veggies.indexOf(veggie) === -1) {
        veggies.push(veggie);
        console.log('New veggies collection is : ' + veggies);
    } else if (veggies.indexOf(veggie) > -1) {
        console.log(veggie + ' already exists in the veggies collection.');
    }
}

var veggies = ['potato', 'tomato', 'chillies', 'green-pepper'];

updateVegetablesCollection(veggies, 'spinach');
// New veggies collection is : potato,tomato,chillies,green-pepper,spinach
updateVegetablesCollection(veggies, 'spinach');
// spinach already exists in the veggies collection.



--
Evertjan.
The Netherlands.
(Please change the x'es to dots in my emailaddress)

jonas.t...@gmail.com

Mar 1, 2017, 10:29:33 AM
Thanks Julio and Evertjan, it works now.

function createMarkedLinks(){
    for (var t=0;t<marklist.length-1;t++){
        alert(marklist+" marklist length= "+marklist.length);
        var j=0;
        currentnode=marklist[t];
        alert("currentnode="+ marklist[t]);
        for (var s=1;s<marklist.length;s++){
            nextnode=marklist[s];
            inarray=arr[currentnode].nodelinks.indexOf(nextnode);
            if (inarray==-1){
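(The archived message breaks off here; a hypothetical continuation based on the pushes in the first post, not the poster's actual lines:)

                arr[currentnode].nodelinks.push(nextnode); // only reached when indexOf returned -1
                arr[nextnode].nodelinks.push(currentnode);
            }
        }
    }
    update_GraphData();
}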

jonas.t...@gmail.com

Mar 1, 2017, 3:04:59 PM
Well, the final solution works, but it is rather ugly; there is probably a standardised way to increment 2 counters?

Say marklist contains N 4,5,6,7. I generate the following links on each node using the counters, but the code isn't directly easy to follow.
N 4 Links-> 5,6,7
N 5 Links-> 4,6,7
N 6 Links-> 4,5,7
N 7 Links-> 4,5,6


function createMarkedLinks(){
    var y=0;
    marklist.sort();
    for (var t=0;t<marklist.length-1;t++){
        var x=y;
        y++;
        currentnode=marklist[t];
        for (var s=1+x;s<marklist.length;s++){
            nextnode=marklist[s];
            x++;
            inarray=arr[currentnode].nodelinks.indexOf(nextnode);
            if (inarray==-1){
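(Again truncated in the archive. The x/y bookkeeping above is equivalent to simply starting the inner loop at t + 1, the usual all-pairs idiom; a minimal sketch, assuming the same marklist/arr/nodelinks/update_GraphData structure as in the earlier posts:)

function createMarkedLinks(){
    // sort() compares as strings; a comparator keeps numeric node ids in order
    marklist.sort(function (a, b) { return a - b; });
    for (var t = 0; t < marklist.length - 1; t++){
        var currentnode = marklist[t];
        // starting at t + 1 visits each unordered pair exactly once,
        // so no extra x/y counters are needed
        for (var s = t + 1; s < marklist.length; s++){
            var nextnode = marklist[s];
            if (arr[currentnode].nodelinks.indexOf(nextnode) == -1){
                arr[currentnode].nodelinks.push(nextnode);
                arr[nextnode].nodelinks.push(currentnode);
            }
        }
    }
    update_GraphData();
}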
Message has been deleted

Evertjan.

Mar 3, 2017, 7:02:43 AM
jonas.t...@gmail.com wrote on 03 Mar 2017 in comp.lang.javascript:

> I wonder if you could

"you" as in "one"?

> extend the code to a general permutation algorithm?

What is a "a general permutation algorithm"?

Do you mean you want a function that does something,
having externally defined arrays as input-parameters?

> extend the code

Here is the problem.

"You" do not "extend" a convoluted code to make it more general,
as its convolution will grow [perhaps even exponentially].

"You" start over again, explaining to yourself what such code must/should
do, and then build an experimental function in a test-file full of
breakpoints and/or console.log lines.

jonas.t...@gmail.com

Mar 3, 2017, 12:07:57 PM
Well, Evertjan, my code does create the sequence of pairings of nodes needed for all-to-all, and it does it in a rather efficient manner. That I can't see the most beautiful way to do it is rather another issue; I am not exactly a code guru, but I am a handyman who cuts corners using the little knowledge I have.

I am on to the next problem now, but of course I do know you people are a lot more skilled in both javascript and programming, so I would urge you to post a neater and tidier solution. And for sure, if I just understand the approach, I will use that instead. That is my purpose in posting my half-baked code attempts: I do know I can solve any problem, but I also know you people can beautify it and sometimes even make it more efficient.

Sometimes formulating the problem solves it for me; that is why I post things here. The process of writing down the nuts and bolts often leads to the solution. I probably could do it for myself, but that would not be half the fun, because sometimes you guys help out even when you resist.

jonas.t...@gmail.com

Mar 3, 2017, 12:13:05 PM
A general permutation algorithm does differ from a counter, Evertjan; think about it. It uses input values to create the sequence of the algorithm. Now one can argue that the possible outcome cannot have more/higher entropy than the input; well, at least it does create some diffusion.

Evertjan.

Mar 3, 2017, 12:20:20 PM
jonas.t...@gmail.com wrote on 03 Mar 2017 in comp.lang.javascript:

> Well, Evertjan, my code does create the sequence of pairings of nodes
> needed for all-to-all, and it does it in a rather efficient manner.
> That I can't see the most beautiful way to do it is rather another
> issue; I am not exactly a code guru, but I am a handyman who cuts
> corners using the little knowledge I have.

I don't doubt your belief in yourself, but I doubt your ability to judge that
correctly.

> I am on to the next problem now, but of course I do know you people
> are a lot more skilled in both javascript and programming, so I would
> urge you to post a neater and tidier solution.

Well that would be counterproductive, as the best we can do is help you get
better, not doing your work.

> And for sure, if I just understand the approach, I will use that
> instead. That is my purpose in posting my half-baked code attempts: I
> do know I can solve any problem, but I also know you people can
> beautify it and sometimes even make it more efficient.
>
> Sometimes formulating the problem solves it for me; that is why I post
> things here.

If it does, why bother us with it?

> The process of writing down the nuts and bolts often leads to the
> solution. I probably could do it for myself, but that would not be
> half the fun, because sometimes you guys help out even when you resist.

Indeed, I hope my resisting does help you get better at it.

==============================

But could you please answer the Qs I asked you?

> > I wonder if you could
>
> "you" as in "one"?
>
> > extend the code to a general permutation algorithm?
>
> What is a "a general permutation algorithm"?
>
> Do you mean you want a function that does something,
> having externally defined arrays as input-parameters?



jonas.t...@gmail.com

Mar 3, 2017, 5:30:53 PM
Well, you see, Evertjan, if you can solve RSA on an Atari emulating a Mac, I really think you can solve any problem. Well, if it is indeed solvable.

Evertjan.

Mar 3, 2017, 5:41:40 PM
jonas.t...@gmail.com wrote on 03 Mar 2017 in comp.lang.javascript:

>> > > extend the code to a general permutation algorithm?
>> >
>> > What is a "a general permutation algorithm"?
>> >
>> > Do you mean you want a function that does something,
>> > having externally defined arrays as input-parameters?


> Well, you see, Evertjan, if you can solve RSA

I don't see. What is RSA?

> on an Atari emulating a
> Mac, I really think you can solve any problem.

"You" as in "One"?

> Well, if it is indeed solvable.

My problem with you is that you don't answer my Qs.

Is that unsolvable?

jonas.t...@gmail.com

Mar 3, 2017, 5:49:04 PM
Well, RSA is an encryption scheme using two large primes to create a composite number. Still in use...

jonas.t...@gmail.com

Mar 3, 2017, 5:52:09 PM
On Friday, March 3, 2017 at 11:41:40 PM UTC+1, Evertjan. wrote:
Well, anyone fit to solve composite number decomposition is probably fit to solve any problem.

I can give you a hint: start with the easy ones, 21.

Evertjan.

Mar 3, 2017, 6:07:45 PM
jonas.t...@gmail.com wrote on 03 Mar 2017 in comp.lang.javascript:

> Well, anyone fit to solve composite number decomposition is probably
> fit to solve any problem.

A bold statement that needs evidence.

I don't think so, having been a family doctor for most of my life.

jonas.t...@gmail.com

Mar 3, 2017, 7:01:04 PM

Evertjan.

Mar 4, 2017, 6:35:55 AM
jonas.t...@gmail.com wrote on 04 Mar 2017 in comp.lang.javascript:

> On Saturday, March 4, 2017 at 12:07:45 AM UTC+1, Evertjan. wrote:
>> jonas.t...@gmail.com wrote on 03 Mar 2017 in comp.lang.javascript:
>>
>> > Well, anyone fit to solve composite number decomposition is probably
>> > fit to solve any problem.
>>
>> A bold statement that needs evidence.
>>
>> I don't think so, having been a family doctor for most of my life.

[please do not quote signatures on usenet, signature removed]

> Well they chose RSA because integer factorization is "thought" to be a
> hard problem. https://en.wikipedia.org/wiki/Integer_factorization

I wouldn't say 'they chose', but anyway, that is not an argument against my
argument that it is FALSE that all problems can be solved by 'anyone fit to
solve composite number decomposition'.

Supposing you are both 'integer' and 'fit to solve composite number
decomposition', just for the sake of argument, could you solve the problem
that we cannot find the supposed 50% antimatter in our universe?

jonas.t...@gmail.com

Mar 4, 2017, 9:00:43 AM
Well, Evertjan, as I understand it the expansion of the universe is a "conjecture" due to the observed redshift. And the antimatter is needed to explain the expansion...

You really suggest I go on and try to solve something that may be totally wrong from start to end?

No, Evertjan, I deal with real-world problems, not fictional ones.

Evertjan.

Mar 4, 2017, 9:24:29 AM
jonas.t...@gmail.com wrote on 04 Mar 2017 in comp.lang.javascript:

> Well, Evertjan, as I understand it the expansion of the universe is a
> "conjecture" due to the observed redshift.

Good thing the universe is not as you understand it,
and don't claim that it is so,
when I don't come up with a better understanding.

> And the antimatter is needed to explain the expansion...

You are not explaining such need, just conjecturing authority.

> You really suggest I go on and try to solve something that may be
> totally wrong from start to end?

No, I asked you to show that your stand that 'anyone fit to solve composite
number decomposition' can solve anything.

Even if you were able to solve my example, that would only mean that
'someone' could, not that 'anyone' could.

> No, Evertjan, I deal with real-world problems, not fictional ones.

You seem to have no idea about the problems of the real world,
also called 'our universe'.

I submit that algorithms are fictional problems,
while disease, loneliness, global warming, cultural misogyny,
child abuse, religion, blind faith, to name a few, are the real ones.

But be my guest and retreat into your pseudo-reality of algorithms.

Michael Haufe (TNO)

Mar 4, 2017, 10:02:43 AM
jonas.t...@gmail.com wrote:

> Well they chose RSA because integer factorization is "thought" to be a hard problem.

You seem to deeply misunderstand what a "hard" problem is in Computer Science. This is not a matter of opinion, and no scare quotes are needed. It is well defined. I invite you to learn the basics of the Theory of Computation before making such claims.

Here is an intro on the most famous Complexity Class:

<https://www.youtube.com/watch?v=msp2y_Y5MLE>

Ben Bacarisse

Mar 4, 2017, 10:51:48 AM
"Michael Haufe (TNO)" <t...@thenewobjective.com> writes:

> jonas.t...@gmail.com wrote:
>
>> Well they chose RSA because integer factorization is "thought" to be
>> a hard problem.
>
> You seem to deeply misunderstand what a "hard" problem is in Computer
> Science. This is not a matter of opinion, and no scare quotes are
> needed. It is well defined.

NP-hard is precisely defined, but would you say that when someone talks
about a hard problem they always mean NP-hard? I suppose that's the
natural thing to assume when talking about complexity but I wouldn't be
certain. It still carries a strong informal meaning.

Anyway, if we take hard to mean NP-hard then integer factorisation is
not known to be hard, and most people would say that they don't expect it
to turn out to be. The key new fact being that we now know that
primality testing is in P which puts the decision problem version of
factorisation in co-NP.

<snip>
--
Ben.

Michael Haufe (TNO)

Mar 4, 2017, 12:57:20 PM
Ben Bacarisse wrote:

> NP-hard is precisely defined, but would you say that when someone talks
> about a hard problem they always mean NP-hard? I suppose that's the
> natural thing to assume when talking about complexity but I wouldn't be
> certain. It still carries a strong informal meaning.

Of course in general conversation, no. Nor in general conversation about programming. Most programmers and developers have no formal education and therefore carry their own bias with their terminology. Confusion is caused when the same word is used with different meanings by people discussing the same topic, and it should be avoided. When Jonas moved from informal conversation to criticism regarding factorization, he moved into a field where the terminology is not so informal.

In most writings IME, a "small" problem is computable in polynomial time (such as n^3). Whereas a "large" problem would be computable in exponential time (such as 2^n). For example:

If I had two functions, each with a respective running time of the above:

// O(n^3)
function small(n){ ... }

// O(2^n)
function big(n){ ... }

small(1000) // 1 billion steps to complete
big(1000) // 1 x 10^301 steps to complete

If you graphed n^3 and 2^n you can see the shape of the problem [1][2]

Faster computers simply shift the curve left, but don't change the shape of the graph.

> Anyway, if we take hard to mean NP-hard then integer factorisation is
> not known to be hard, and most people would say that they don't expect it
> to turn out to be. The key new fact being that we now know that
> primality testing is in P which puts the decision problem version of
> factorisation in co-NP.

Factorization is considered "hard" or "difficult" presently [3] because you have to use brute force to search for possible solutions instead of deriving them directly from what was computed already

[1] <http://www.wolframalpha.com/input/?i=Plot%5Bn%5E3,+%7Bn,+0,+1000%7D%5D>
[2] <http://www.wolframalpha.com/input/?i=Plot%5B2%5En,+%7Bn,+0,+1000%7D%5D>
[3] Excluding some quantum computing algorithms which can do this theoretically in polynomial time, but have in reality only factored very small numbers <https://en.wikipedia.org/wiki/Integer_factorization_records#Records_for_efforts_by_quantum_computers>
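
(The arithmetic behind those step counts, as a toy check in the console; my own illustration, not from the original post:)

// cubic: 1000^3 = 1e9 steps ("1 billion")
Math.pow(1000, 3); // 1000000000
// exponential: 2^1000 steps
Math.pow(2, 1000); // 1.0715086071862673e+301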

Ben Bacarisse

Mar 4, 2017, 3:56:48 PM
"Michael Haufe (TNO)" <t...@thenewobjective.com> writes:

> Ben Bacarisse wrote:
>
>> NP-hard is precisely defined, but would you say that when someone talks
>> about a hard problem they always mean NP-hard? I suppose that's the
>> natural thing to assume when talking about complexity but I wouldn't be
>> certain. It still carries a strong informal meaning.
>
> Of course in general conversation, no. Nor in general conversation
> about programming. Most programmers and developers have no formal
> education and therefore carry their own bias with their
> terminology. Confusion is caused when the same word is used with
> different meanings by people discussing the same topic, and it
> should be avoided. When Jonas moved from informal conversation to
> criticism regarding factorization, he moved into a field where the
> terminology is not so informal.
>
> In most writings IME, a "small" problem is computable in polynomial
> time (such as n^3). Whereas a "large" problem would be computable in
> exponential time (such as 2^n).

That's interesting. I've not come across that usage. I would take the
terms small and large to refer to the quantity of data.

> For example:
>
> If I had two functions, each with a respective running time of the above:
>
> // O(n^3)
> function small(n){ ... }
>
> // O(2^n)
> function big(n){ ... }

I wanted to mention a pet peeve of mine, but I see from Wikipedia that
it's becoming accepted to use O(f) in this way. But in case there is
some chance to save the old meaning, I'll just say that when I learned about
it, Big-O was only an (asymptotic) upper bound, so the function called
small is also O(2^n), and the one called big might be O(n^3). (A lower
bound was given by Omega and both bounds by Theta.)

> small(1000) // 1 billion steps to complete
> big(1000) // 1 x 10^301 steps to complete

Or almost any other number of steps, surely? The order of growth says
only that steps(small(2*x)) ~ 8*steps(small(x)).

<snip>
>> Anyway, if we take hard to mean NP-hard then integer factorisation is
>> not known to be hard, and most people would say that they don't expect it
>> to turn out to be. The key new fact being that we now know that
>> primality testing is in P which puts the decision problem version of
>> factorisation in co-NP.
>
> Factorization is considered "hard" or "difficult" presently [3]
> because you have to use brute force to search for possible solutions
> instead of deriving them directly from what was computed already

That's probably how Jonas meant it, too. Was your complaint that he
should have put "hard" in scare quotes rather than "thought"? If so, I
misunderstood your original reply. Sorry about that.

<snip>
--
Ben.

Michael Haufe (TNO)

Mar 4, 2017, 6:30:04 PM
Ben Bacarisse wrote:
> "Michael Haufe (TNO)" writes:

> > In most writings IME, a "small" problem is computable in polynomial
> > time (such as n^3). Whereas a "large" problem would be computable in
> > exponential time (such as 2^n).
>
> That's interesting. I've not come across that usage. I would take the
> terms small and large to refer to the quantity of data.

The amount of data in this sense would be the specific value of 'n'.

In the context of algorithms, I've also seen common use of "simple"/"complex" to refer to the same.

> > For example:
> >
> > If I had two functions, each with a respective running time of the above:
> >
> > // O(n^3)
> > function small(n){ ... }
> >
> > // O(2^n)
> > function big(n){ ... }
>
> I wanted to mention a pet peeve of mine, but I see from Wikipedia that
> it's becoming accepted to use O(f) in this way. But in case there is
>> some chance to save the old meaning, I'll just say that when I learned about
> it, Big-O was only an (asymptotic) upper bound, so the function called
> small is also O(2^n), and the one called big might be O(n^3). (A lower
> bound was given by Omega and both bounds by Theta.)

What you learned is still true. I'm abusing notation slightly, which is sadly a bad habit I've picked up and is a bit common:
<https://en.wikipedia.org/wiki/Big_O_notation#Matters_of_notation>

> > small(1000) // 1 billion steps to complete
> > big(1000) // 1 x 10^301 steps to complete
>
> Or almost any other number of steps, surely? The order of growth says
> only that steps(small(2*x)) ~ 8*steps(small(x)).

Any *fewer* number of steps, yes. Constant factors are irrelevant at scale and are ignored:

<https://en.wikipedia.org/wiki/Big_O_notation#Multiplication_by_a_constant>

> > Factorization is considered "hard" or "difficult" presently [3]
> > because you have to use brute force to search for possible solutions
> > instead of deriving them directly from what was computed already
>
> That's probably how Jonas meant it, too. Was your complaint that he
> should have put "hard" in scare quotes rather than "thought"? If so, I
> misunderstood your original reply. Sorry about that.

The way I read that statement "... 'thought' to be a hard problem ..." was that he disagreed that it was. My interpretation is admittedly colored by his other statements in this thread and other threads. I could also be hitting a language barrier here...

jonas.t...@gmail.com

Mar 4, 2017, 6:37:48 PM
Well, "thought" suggests that people agree it is a hard problem, but there is no proof it actually is one. And if it indeed solved during a couple on a 16 MHz computer it probably "it probably just thought to be a hard problem".

Ben Bacarisse

Mar 4, 2017, 7:29:12 PM
"Michael Haufe (TNO)" <t...@thenewobjective.com> writes:

> Ben Bacarisse wrote:
>> "Michael Haufe (TNO)" writes:
<snip>
>> > If I had two functions, each with a respective running time of the above:
>> >
>> > // O(n^3)
>> > function small(n){ ... }
>> >
>> > // O(2^n)
>> > function big(n){ ... }
>>
>> I wanted to mention a pet peeve of mine, but I see from Wikipedia that
>> it's becoming accepted to use O(f) in this way. But in case there is
>> some chance to save the old meaning, I'll just say that when I learned about
>> it, Big-O was only an (asymptotic) upper bound, so the function called
>> small is also O(2^n), and the one called big might be O(n^3). (A lower
>> bound was given by Omega and both bounds by Theta.)
>
> What you learned is still true. I'm abusing notation slightly which is
> sadly a bad habit I've picked up and is a bit common:
> <https://en.wikipedia.org/wiki/Big_O_notation#Matters_of_notation>
>
>> > small(1000) // 1 billion steps to complete
>> > big(1000) // 1 x 10^301 steps to complete
>>
>> Or almost any other number of steps, surely? The order of growth says
>> only that steps(small(2*x)) ~ 8*steps(small(x)).
>
> Any *fewer* number of steps, yes.

Why only fewer? There's no reason that small(1000) might not take 2
billion steps and/or big(1000) might take 10^500 steps. You can't tell
anything about the number of steps taken by one call when all you know
is the order of growth.

<snip>
--
Ben.

Ben Bacarisse

Mar 4, 2017, 7:41:41 PM
jonas.t...@gmail.com writes:

> On Sunday, March 5, 2017 at 12:30:04 AM UTC+1, Michael Haufe (TNO) wrote:
>> Ben Bacarisse wrote:
<snip>
>> > That's probably how Jonas meant it, too. Was your complaint that he
>> > should have put "hard" in scare quotes rather than "thought"? If so, I
>> > misunderstood your original reply. Sorry about that.
>>
>> The way I read that statement "... 'thought' to be a hard problem
>> ..." was that he disagreed that it was. My interpretation is
>> admittedly colored by his other statements in this thread and other
>> threads. I could also be hitting a language barrier here...
>
> Well, "thought" suggests that people agree it is a hard problem, but
> there is no proof it actually is one.

Which bit are you complaining about? The complexity class of
factorisation was not known then and is not known today so all opinions
about it are just "thoughts" rather than proofs. Do you object to the
fact that most well-informed people agreed on the notion that it appears
to be hard? Everywhere but on Usenet that's often a sign that the
opinion is worth listening to.

> And if it indeed solved during a couple on a 16 MHz computer it
> probably "it probably just thought to be a hard problem".

I've no idea what that means, but that's a language thing, I think.

--
Ben.

Michael Haufe (TNO)

Mar 5, 2017, 9:26:48 AM
On Saturday, March 4, 2017 at 6:29:12 PM UTC-6, Ben Bacarisse wrote:

> Why only fewer? There's no reason that small(1000) might not take 2
> billion steps and/or big(1000) might take 10^500 steps.

Because it is true by definition in this case (which is why I left out the body of the functions).

> You can't tell anything about the number of steps taken by one call when all you know is the order of growth.

You can, which is why it is used. It is an abstraction and classification of algorithms.

Big-O states that the number of steps will be at-most that value, but may be less. If I said Θ(n) then it would be exactly n.

If both small() and big() were simply for-loops that incremented some value and returned, then you could state that the running time would be exactly the order of growth. Now if you allowed continue or break statements in the loop then you would know that in the worst case scenario it would not execute MORE than the order of growth, but may execute fewer.
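
(A toy illustration of that difference, not code from the thread:)

// always performs exactly n iterations: the running time is Θ(n)
function exactCount(n) {
    var steps = 0;
    for (var i = 0; i < n; i++) steps++;
    return steps;
}

// performs at most n iterations, but a break may end it early,
// so O(n) is an upper bound that is not always reached
function atMostCount(n, stopAt) {
    var steps = 0;
    for (var i = 0; i < n; i++) {
        if (i === stopAt) break;
        steps++;
    }
    return steps;
}

// exactCount(1000) === 1000; atMostCount(1000, 10) === 10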

Ben Bacarisse

Mar 5, 2017, 10:00:06 AM
"Michael Haufe (TNO)" <t...@thenewobjective.com> writes:

> On Saturday, March 4, 2017 at 6:29:12 PM UTC-6, Ben Bacarisse wrote:
>
>> Why only fewer? There's no reason that small(1000) might not take 2
>> billion steps and/or big(1000) might take 10^500 steps.
>
> Because it is true by definition in this case (which is why I left out
> the body of the functions).

I don't follow. By definition of what?

>> You can't tell anything about the number of steps taken by one call when all you know is the order of growth.
>
> You can, which is why it is used. It is an abstraction and
> classification of algorithms.
>
> Big-O states that the number of steps will be at-most that value, but
> may be less. If I said Θ(n) then it would be exactly n.

No, that's not right. I'm pretty sure you know the definitions so I'm
not sure what the source of the confusion is.

> If both small() and big() were simply for-loops that incremented some
> value and returned, then you could state that the running time would
> be exactly the order of growth. Now if you allowed continue or break
> statements in the loop then you would know that in the worst case
> scenario it would not execute MORE than the order of growth, but may
> execute fewer.

You didn't describe the function. You just said that small(1000) takes
a billion steps and can take fewer but can't take more. Given a full
description of small we probably don't need Big-O at all; we could just
write the number of steps as a function of n.

But without knowing what small is doing -- knowing only that its order
of growth is O(n^3) -- the number of steps taken for any one n is entirely
unknown. That's the position I was in when I said that.

--
Ben.

Michael Haufe (TNO)

Mar 6, 2017, 10:50:36 AM
Ben Bacarisse wrote:
> "Michael Haufe (TNO)" writes:
>
> > Ben Bacarisse wrote:
> >
> >> Why only fewer? There's no reason that small(1000) might not take 2
> >> billion steps and/or big(1000) might take 10^500 steps.
> >
> > Because it is true by definition in this case (which is why I left out
> > the body of the functions).
>
> I don't follow. By definition of what?

I stated the limiting behavior on the functions:

// O(n^3)
function small(n){ ... }

// O(2^n)
function big(n){ ... }

> >> You can't tell anything about the number of steps taken by one call when all you know is the order of growth.
> >
> > You can, which is why it is used. It is an abstraction and
> > classification of algorithms.
> >
> > Big-O states that the number of steps will be at-most that value, but
> > may be less. If I said Θ(n) then it would be exactly n.
>
> No, that's not right. I'm pretty sure you know the definitions so I'm
> not sure what the source of the confusion is.

Big-O is the upper-bound specifically. Whatever the input,
the resulting time will not exceed this bound by definition,
otherwise it wouldn't be the upper bound.

> > if both small() and big() were simply for-loops that incremented some
> > value and returned, then you could state that the running time would
> > be exactly the order of growth. Now if you allowed continue or break
> > statements in the loop then you would know that in the worst case
> > scenario it would not execute MORE than the order of growth, but may
> > execute fewer.
>
> You didn't describe the function. You just said that small(1000) takes
> a billion steps and can take fewer but can't take more. Given a full
> description of small we probably don't need Big-O at all; we could just
> write the number of steps as a function of n.

The point is that it doesn't matter if the upper bound is known.
It's an abstraction.

> But without knowing what small is doing -- knowing only that its order
> of growth is O(n^3) -- the number of steps taken for any one n is entirely
> unknown. That's the position I was in when I said that.

What is known is that whatever it takes to perform its duty it won't
be MORE than O(n^3).

If you knew it would take AT LEAST as long or precisely as long,
you would use different notations:

<https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann.E2.80.93Landau_notations>

Ben Bacarisse

Mar 6, 2017, 7:50:38 PM
"Michael Haufe (TNO)" <t...@thenewobjective.com> writes:

> Ben Bacarisse wrote:
>> "Michael Haufe (TNO)" writes:
>>
>> > Ben Bacarisse wrote:
>> >
>> >> Why only fewer? There's no reason that small(1000) might not take 2
>> >> billion steps and/or big(1000) might take 10^500 steps.
>> >
>> > Because it is true by definition in this case (which is why I left out
>> > the body of the functions).
>>
>> I don't follow. By definition of what?
>
> I stated the limiting behavior on the functions:
>
> // O(n^3)
> function small(n){ ... }
>
> // O(2^n)
> function big(n){ ... }

That gives the asymptotic upper bound. You can't say anything about
small(1000) from the fact that small(n) is O(n^3).

Here are two other functions that are O(n^3):

function small1(n) { if (n < 1000000) return 1; else return small(n); }

function small2(n) { for (var i = 0; i < 99; i++) small(n); return small(n); }

Knowing only the asymptotic upper bound of n^3, you can't say anything
about the number of steps taken by small1(1000) or small2(1000).

>> >> You can't tell anything about the number of steps taken by one call when all you know is the order of growth.
>> >
>> > You can, which is why it is used. It is an abstraction and
>> > classification of algorithms.
>> >
>> > Big-O states that the number of steps will be at-most that value, but
>> > may be less. If I said Θ(n) then it would be exactly n.
>>
>> No, that's not right. I'm pretty sure you know the definitions so I'm
>> not sure what the source of the confusion is.
>
> Big-O is the upper-bound specifically. Whatever the input,
> the resulting time will not exceed this bound by definition,
> otherwise it wouldn't be the upper bound.

It's not an upper bound in that sense -- it's the asymptotic upper
bound, up to a constant factor. It states (in this case) that there is
some constant C and some integer M such that

steps_taken_by(small(k)) > C*k^3 for all k > M.

small(1000) can take *any* finite number of steps at all despite knowing
that it's O(n^3).

<snip>
>> But without knowing what small is doing -- knowing only that its order
>> of growth is O(n^3) -- the number of steps taken for any one n is entirely
>> unknown. That's the position I was in when I said that.
>
> What is known is that whatever it takes to perform its duty it won't
> be MORE than O(n^3).
>
> If you knew it would take AT LEAST as long or precisely as long,
> you would use different notations:
>
> <https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann.E2.80.93Landau_notations>

I really do know what these notations mean. None of them can tell you
anything about the cost of a single call like small(1000).

--
Ben.

Thomas 'PointedEars' Lahn

Mar 7, 2017, 12:46:10 AM
Ben Bacarisse wrote:

> "Michael Haufe (TNO)" <t...@thenewobjective.com> writes:
>> Big-O is the upper-bound specifically. Whatever the input,
>> the resulting time will not exceed this bound by definition,
>> otherwise it wouldn't be the upper bound.
>
> It's not an upper bound in that sense -- it's the asymptotic upper
> bound, up to a constant factor. It states (in this case) that there
> is some constant C and some integer M such that
>
> steps_taken_by(small(k)) > C*k^3 for all k > M.

No, it does not.

--
PointedEars
FAQ: <http://PointedEars.de/faq> | <http://PointedEars.de/es-matrix>
<https://github.com/PointedEars> | <http://PointedEars.de/wsvn/>
Twitter: @PointedEars2 | Please do not cc me./Bitte keine Kopien per E-Mail.

Ben Bacarisse

Mar 7, 2017, 6:23:42 AM
Thomas 'PointedEars' Lahn <Point...@web.de> writes:

> Ben Bacarisse wrote:
>
>> "Michael Haufe (TNO)" <t...@thenewobjective.com> writes:
>>> Big-O is the upper-bound specifically. Whatever the input,
>>> the resulting time will not exceed this bound by definition,
>>> otherwise it wouldn't be the upper bound.
>>
>> It's not an upper bound in that sense -- it's the asymptotic upper
>> bound, up to a constant factor. It states (in this case) that there
>> is some constant C and some integer M such that
>>
>> steps_taken_by(small(k)) > C*k^3 for all k > M.

steps_taken_by(small(k)) < C*k^3 for all k > M.

(Some people use <= and >= for one or other but that makes no difference.)

> No, it does not.

Indeed, a typo. Surely just pointing out the swapped comparison would
have been more helpful? Anyway, thanks for noticing and if there is
anything else, maybe you could say what?

--
Ben.

Thomas 'PointedEars' Lahn

Mar 7, 2017, 9:34:35 AM
Ben Bacarisse wrote:

> Thomas 'PointedEars' Lahn <Point...@web.de> writes:
>> Ben Bacarisse wrote:
>>> "Michael Haufe (TNO)" <t...@thenewobjective.com> writes:
>>>> Big-O is the upper-bound specifically. Whatever the input,
>>>> the resulting time will not exceed this bound by definition,
>>>> otherwise it wouldn't be the upper bound.
>>> It's not an upper bound in that sense -- it's the asymptotic upper
>>> bound, up to a constant factor. It states (in this case) that there
>>> is some constant C and some integer M such that
>>> steps_taken_by(small(k)) > C*k^3 for all k > M.
>
> steps_taken_by(small(k)) < C*k^3 for all k > M.
>
> (Some people use <= and >= for one or other but that makes no difference.)

Yes, it does.

>> No, it does not.
>
> Indeed, a typo. Surely just pointing out the swapped comparison would
> have been more helpful?

No, blankly, firmly, and concisely denying your *unsubstantiated claim* made
you think more, caused less noise, and saved me more time, in total. While
it required you to invest the time that you thought you could save and still
get away with it, shifting the burden of proof.

“The amount of energy necessary to refute bullshit is an order of magnitude
bigger than to produce it.” (Alberto Brandolini, programmer, 2014)

> Anyway, thanks for noticing

You’re welcome.

> and if there is anything else, maybe you could say what?

Maybe. Or maybe you should learn how to ask questions the smart way, and
how to post on topic.

Ben Bacarisse

Mar 7, 2017, 11:47:59 AM
Thomas 'PointedEars' Lahn <Point...@web.de> writes:
<snip>
> Maybe. Or maybe you should learn how to ask questions the smart way, and
> how to post on topic.

Why are you posting on this topic?

--
Ben.

Thomas 'PointedEars' Lahn

Mar 7, 2017, 11:56:03 AM
(You have destroyed the context of my statement.)

Obviously, to point out false, unsubstantiated claims in a discussion
that I consider off-topic here, too.

Why do you people keep feeding those Google Groups trolls?

Ben Bacarisse

Mar 7, 2017, 12:28:16 PM
Thomas 'PointedEars' Lahn <Point...@web.de> writes:

> Ben Bacarisse wrote:
>
>> Thomas 'PointedEars' Lahn <Point...@web.de> writes:
>>> Maybe. Or maybe you should learn how to ask questions the smart way, and
>>> how to post on topic.
>>
>> Why are you posting on this topic?
>
> (You have destroyed the context of my statement.)
>
> Obviously, to point out false, unsubstantiated claims in a discussion
> that I consider off-topic here, too.

That's what I was doing. Why are the rules different for you?

> Why do you people keep feeding those Google Groups trolls?

Michael Haufe is not a Google Groups troll. He replied to someone
matching that description, but I did not. You could have pointed out
the lack of topicality or any number of false unsubstantiated claims
long before, but it seems to be me who fascinates (or maybe just
irritates) you enough to do that.

--
Ben.

Thomas 'PointedEars' Lahn

Mar 7, 2017, 3:07:09 PM
Ben Bacarisse wrote:

> Thomas 'PointedEars' Lahn <Point...@web.de> writes:
>> Ben Bacarisse wrote:
>>> Thomas 'PointedEars' Lahn <Point...@web.de> writes:
>>>> Maybe. Or maybe you should learn how to ask questions the smart way,
>>>> and how to post on topic.
>>> Why are you posting on this topic?
>> (You have destroyed the context of my statement.)
>>
>> Obviously, to point out false, unsubstantiated claims in a discussion
>> that I consider off-topic here, too.
>
> That's what I was doing.

You were doing both less and more than that, and you know it.

> Why are the rules different for you?

They are not. It is a matter of dosage, so to speak. What does
this discussion have to do with the topic of this newsgroup *now*?

>> Why do you people keep feeding those Google Groups trolls?
>
> Michael Haufe is not a Google Groups troll.

I did not say that he were.

> He replied to someone matching that description,

Exactly.

> but I did not. You could have pointed out the lack of topicality or any
> number of false unsubstantiated claims long before, but it seems to be
> me who fascinates (or maybe just irritates) you enough to do that.

You are overrating the importance of Usenet and yourself to me.

Contrary to common misconception, I do have a life.

F'up2 poster

Flippie Flink

Mar 7, 2017, 4:29:16 PM
On 07-03-17 21:07, Thomas 'PointedEars' Lahn wrote:

>> but I did not. You could have pointed out the lack of topicality or any
>> number of false unsubstantiated claims long before, but it seems to be
>> me who fascinates (or maybe just irritates) you enough to do that.
>
> You are overrating the importance of Usenet and yourself to me.
>
> Contrary to common misconception, I do have a life.

Do you? Then why don't you spend more time on it?

"...blankly, firmly, and concisely denying your *unsubstantiated claim* made
you think more, caused less noise, and saved me more time, in total."

How much time did you really save considering those three extra replies you
gave after the "no, it does not"? Seems to me like you made the wrong
decision somewhere.

--
Flinke Flip

Michael Haufe (TNO)

Mar 11, 2017, 9:24:30 PM
Ben Bacarisse wrote:

> That gives the asymptotic upper bound. You can't say anything about
> small(1000) from the fact that small(n) is O(n^3).
>
> Here are two other functions that are O(n^3):
>
> function small1(n) { if (n < 1000000) return 1; else return small(n); }
>
> function small2(n) { for (var i = 0; i < 99; i++) small(n); return small(n); }
>
> Knowing only the asymptotic upper bound of n^3, you can't say anything
> about the number of steps taken by small1(1000) or small2(1000).

> I really do know what these notations mean. None of them can tell you
> anything about the cost of a single call like small(1000).

I read back through our discussion and did a refresher on some notes, and I think we're talking about two different but related things.

What I was trying to state was that any multiple of the growth rate of a function is irrelevant as they fall into the same category and therefore I can ignore what that multiple is:

f(x) = x^2 + 2x + 1
g(x) = 4x^2
h(x) = x^2

O(f(x)) = O(g(x)) = O(h(x)) = O(x^2)

If I understand your argument correctly, you're saying that it may be true that f(x) is O(x^2) eventually, but at some points in the domain it can be far greater than that which wouldn't fall into such a categorization, such as:

f(n) = n^2, n >= 1000
f(n) = n^9, n < 1000

I wasn't sure of the answer to this, so I reviewed one of my Algorithms texts [1] and surprisingly on page 45 it has a graph and description which reflects this very issue.

What we both overlooked (or forgot) is that O(f(n)) is defined with a minimum constraint as part of the definition that would exclude those lower values. So in the above function it would still be considered O(x^2). From p. 47 of the text:

O(g(n)) = {f(n): there exist positive constants c and n0 such that
0 <= f(n) <= c*g(n) for all n >= n0 }

In other words, the larger portion is chopped off. Here's a picture of the graph:

<http://thenewobjective.com/cljs/asymptotic-notation.jpg>

[1] <https://www.amazon.com/Introduction-Algorithms-3rd-MIT-Press/dp/0262033844>
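
(To make the n0 clause concrete, a toy check of my own, not from the text; c and n0 are hand-picked, and g(n) = n^2:)

// n^9 below 1000, n^2 from 1000 on -- the early spike is ignored by the definition
function f(n) { return n < 1000 ? Math.pow(n, 9) : n * n; }
function g(n) { return n * n; }

var c = 1, n0 = 1000;
for (var n = n0; n <= n0 + 3; n++) {
    console.log(n, 0 <= f(n) && f(n) <= c * g(n)); // true for every n >= n0
}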

Ben Bacarisse

Mar 15, 2017, 1:32:46 PM
"Michael Haufe (TNO)" <t...@thenewobjective.com> writes:

> Ben Bacarisse wrote:
>
>> That gives the asymptotic upper bound. You can't say anything about
>> small(1000) from the fact that small(n) is O(n^3).
>>
>> Here are two other functions that are O(n^3):
>>
>> function small1(n) { if (n < 1000000) return 1; else return small(n); }
>>
>> function small2(n) { for (var i = 0; i < 99; i++) small(n); return small(n); }
>>
>> Knowing only the asymptotic upper bound of n^3, you can't say anything
>> about the number of steps taken by small1(1000) or small2(1000).
>
>> I really do know what these notations mean. None of them can tell you
>> anything about the cost of a single call like small(1000).
>
> I read back through our discussion and did a refresher on some notes
> and I think we're talking towards two different but related things.

Probably.

> What I was trying to state was that any multiple of the growth rate of
> a function is irrelevant as they fall into the same category and
> therefore I can ignore what that multiple is:
>
> f(x) = x^2 + 2x + 1
> g(x) = 4x^2
> h(x) = x^2
>
> O(f(x)) = O(g(x)) = O(h(x)) = O(x^2)
>
> If I understand your argument correctly, you're saying that it may be
> true that f(x) is O(x^2) eventually, but at some points in the domain
> it can be far greater than that which wouldn't fall into such a
> categorization, such as:
>
> f(n) = n^2, n >= 1000
> f(n) = n^9, n < 1000

I was saying you can't say anything about a single value of f from
knowing only the order. If we know it is O(n^2), f(1000) could be 1, 99,
1x10^103 or -45. Also f(2000) could be 17, 1,000,000 or 0. You gave a
single value and said that it could be lower but not higher (because O
is an upper bound). But f can take any values at all at any of a
finite set of points and still be O(n^2).

> I wasn't sure of the answer to this, so I reviewed one of my
> Algorithms texts [1] and surprisingly on page 45 it has a graph and
> description which reflects this very issue.
>
> What we both overlooked (or forgot) is that O(f(n)) is defined with a
> minimum constraint as part of the definition that would exclude those
> lower values. So in the above function it would still be considered
> O(x^2). From p. 47 of the text:
>
> O(g(n)) = {f(n): there exist positive constants c and n0 such that
> 0 <= f(n) <= c*g(n) for all n >= n0 }
>
> In other words, the larger portion is chopped off.

I don't think I forgot or overlooked that. It was the crux of what I
was trying to say.

<snip>
--
Ben.

Michael Haufe (TNO)

Mar 15, 2017, 1:59:34 PM
Ben Bacarisse wrote:

> I was saying you can't say anything about a single value of f from
> knowing only the order. If we know it is O(n^2), f(1000) could be 1, 99,
> 1x10^103 or -45. Also f(2000) could be 17, 1,000,000 or 0. You gave a
> single value and said that it could be lower but not higher (because O
> is an upper bound). But f can take any values at all at any of a
> finite set of points and still be O(n^2).

If you meant at a point less than n0, I agree per the definition.

> > I wasn't sure of the answer to this, so I reviewed one of my
> > Algorithms texts [1] and surprisingly on page 45 it has a graph and
> > description which reflects this very issue.
> >
> > What we both overlooked (or forgot) is that O(f(n)) is defined with a
> > minimum constraint as part of the definition that would exclude those
> > lower values. So in the above function it would still be considered
> > O(x^2). From p. 47 of the text:
> >
> > O(g(n)) = {f(n): there exist positive constants c and n0 such that
> > 0 <= f(n) <= c*g(n) for all n >= n0 }
> >
> > In other words, the larger portion is chopped off.
>
> I don't think I forgot or overlooked that. It was the crux of what I
> was trying to say.

If you meant below n0, then that wasn't clear to me at the time.

Ben Bacarisse

Mar 15, 2017, 11:49:48 PM
"Michael Haufe (TNO)" <t...@thenewobjective.com> writes:

> Ben Bacarisse wrote:
>
>> I was saying you can't say anything about a single value of f from
>> knowing only the order. If we know it is O(n^2), f(1000) could be 1, 99,
>> 1x10^103 or -45. Also f(2000) could be 17, 1,000,000 or 0. You gave a
>> single value and said that it could be lower but not higher (because O
>> is an upper bound). But f can take any values at all at any of a
>> finite set of points and still be O(n^2).
>
> If you meant at a point less than n0, I agree per the definition.

No, even at a point above n0 because there is also c to consider. Take,
for example, n0 = 0 -- i.e. there is some c such that f(n) < c*n^2 for
all n >= 0. Any finite set of values for f(n) can *still* be
accommodated.

(Actually even that's a rather limited description since the notation
applies to functions in real analysis. In fact, even an infinite set of
arbitrary values for f can be accommodated provided they occur over a
bounded region of the domain.)
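
(A concrete toy instance, my own rather than Ben's: one wild value does not break membership in O(n^2), because n0 can be chosen past it.)

// a single arbitrary spike at n = 1000; with n0 = 1001 and c = 1,
// 0 <= f(n) <= c*n^2 holds for all n >= n0, so f is still O(n^2)
function f(n) { return n === 1000 ? 1e103 : n * n; }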

>> > I wasn't sure of the answer to this, so I reviewed one of my
>> > Algorithms texts [1] and surprisingly on page 45 it has a graph and
>> > description which reflects this very issue.
>> >
>> > What we both overlooked (or forgot) is that O(f(n)) is defined with a
>> > minimum constraint as part of the definition that would exclude those
>> > lower values. So in the above function it would still be considered
>> > O(x^2). From p. 47 of the text:
>> >
>> > O(g(n)) = {f(n): there exist positive constants c and n0 such that
>> > 0 <= f(n) <= c*g(n) for all n >= n0 }
>> >
>> > In other words, the larger portion is chopped off.
>>
>> I don't think I forgot or overlooked that. It was the crux of what I
>> was trying to say.
>
> If you meant below n0, then that wasn't clear to me at the time.

No, I meant above or below n0. You simply can't say anything about any
finite set of values of f(n), knowing only that f is O(n^2).

--
Ben.

Michael Haufe (TNO)

Mar 18, 2017, 2:46:55 PM
Ben Bacarisse wrote:

> No, I meant above or below n0. You simply can't say anything about any
> finite set of values of f(n), knowing only that f is O(n^2).

While you could have a constant multiple at a given point, or some multiple of the lower orders, I'm not convinced that it could be some arbitrary value. If f was declared to be O(n^2), f(5) isn't going to be n^3 with f(6) being at n^50. That would be a different rate.

Ben Bacarisse

Mar 18, 2017, 4:12:44 PM
"Michael Haufe (TNO)" <t...@thenewobjective.com> writes:

Give me a few values for f(n) (just list them: f(0) = ?, f(6) = ? and so
on) for which you can be *certain* that f is not O(n^2).

You seem to be saying that f(5) = 125, f(6) =
808281277464764060643139600456536293376 means that f can't be O(n^2) but
that's not so. You can no idea what happens for n > 6. f might even be
O(1).

--
Ben.

Ben Bacarisse

Mar 18, 2017, 4:46:02 PM
"...you can have no idea..."

--
Ben.

Michael Haufe (TNO)

Mar 26, 2017, 2:54:07 PM
Ben Bacarisse wrote:

> Give me a few values for f(n) (just list them: f(0) = ?, f(6) = ? and so
> on) for which you can be *certain* that f is not O(n^2).
>
> You seem to be saying that f(5) = 125, f(6) =
> 808281277464764060643139600456536293376 means that f can't be O(n^2) but
> that's not so. You can no idea what happens for n > 6. f might even be
> O(1).

Differences across machines, and on the same machine, of course can occur, but that doesn't prevent one from comparing the quality of an algorithm against another, nor against itself when modified.

If I called f(5) over and over it wouldn't be guaranteed to take the same amount of time/memory to return an answer.

f(5) ran 5 times could be {10ms, 15ms, 12ms, 14ms, 1hour}
f(6) could be {15ms, 17ms, 12ms, 18ms, 16ms}
f(100): {107ms, 125ms, 120ms, 150ms, 135ms}

and so on. Given sufficient runs and sufficient inputs you *COULD* derive the trend of the function and assign it a categorization. But you don't have to as this analytical framework exists to eliminate such a dependency on a machine and on limited inputs.
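
(A rough sketch of such a timing run, my own illustration; the run count is arbitrary and Date.now gives coarse wall-clock milliseconds:)

// times one call of fn(arg) per run; results vary per run,
// which is why many runs and many inputs are needed to see the trend
function timeRuns(fn, arg, runs) {
    var times = [];
    for (var i = 0; i < runs; i++) {
        var start = Date.now();
        fn(arg);
        times.push(Date.now() - start);
    }
    return times;
}

// e.g. timeRuns(someFunction, 100, 5) might give [107, 125, 120, 150, 135]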

Determining the category of a function is accomplished by evaluating its components. For instance:

"primitive" operations are considered O(1); in other words, 1 step.
These include: assignment, calls, arithmetic, index lookups, reference lookups, and a few others. So if I had a function "arrayMax" I could determine its categorization as follows:

var intList = [8,6,7,15,5,3,0,9]

var arrayMax = (xs, n) =>
n == 1 ? xs[0] :
Math.max(arrayMax(xs, n - 1),xs[n - 1]);

arrayMax(intList, intList.length)

///////////

when n == 1 the runtime is 3 // comparison + lookup + return
else it's the runtime of n - 1 for the recursive call + 7 primitives

or for clarity where T(n) is the runtime:

T(n) = 3, if n == 1
T(n) = T(n - 1) + 7, otherwise

in closed form: T(n) = 7(n - 1) + 3 = 7n - 4

So I'm claiming that 7n - 4 is O(n)

How do I prove it? I simply have to be consistent with the definition of O(n). So I'll "fill in the blanks and turn the crank"

Definition:
f(n) is O(g(n)), for f(n) <= c*g(n) where n >= n0

f(n) := arrayMax
g(n) := n

// 7n - 4 <= c*n for all n >= n0, so:
c := 7
n0 := 1

FIN.

instances of the proof:

var f = n => 7 * n - 4
var g = n => n

f(1) <= 7 * g(1) //true
f(2) <= 7 * g(2) //true
f(3) <= 7 * g(3) //true
...
f(4000) <= 7 * g(4000)
...
etc.


The above was only about evaluating time, but a similar approach is applied for space analysis.

If this doesn't help, then I still don't understand where we're missing each other.

Ben Bacarisse

Mar 27, 2017, 5:48:57 PM
"Michael Haufe (TNO)" <t...@thenewobjective.com> writes:

> Ben Bacarisse wrote:
>
>> Give me a few values for f(n) (just list them: f(0) = ?, f(6) = ? and so
>> on) for which you can be *certain* that f is not O(n^2).
>>
>> You seem to be saying that f(5) = 125, f(6) =
>> 808281277464764060643139600456536293376 means that f can't be O(n^2) but
>> that's not so. You can no idea what happens for n > 6. f might even be
>> O(1).

It might help if you commented on this part. You seemed to be saying
that certain values for f (at some finite set of points from the domain) were
incompatible with some big-O behaviours. Hence I asked for an example:
some finite set of values for f that shows it not to be O(n^2).

> Difference across machines and the same machine of course can occur,
> but that doesn't prevent one from comparing the quality of an
> algorithm against another nor against itself when modified.

I'm talking about mathematical functions and a mathematical notation
that describes how a function may or may not behave.

<snip>
> Determining the category of a function is accomplished by evaluating
> its components. For instance:
> "primitive" operations are considered O(1); in other words, 1 step.

This is true for algorithms, yes, but the notation is more general than
that. (I don't think this is the source of any confusion though.)

<snip example>
> If this doesn't help, then I still don't understand where we're missing
> each other.

I'm not sure how it could help. If you agree that any finite set of
function values is compatible with that function being, say, O(n^2) then
we are in agreement. You seemed to be saying otherwise at one point.

--
Ben.