Hi,
I do some looping like this:
FunctionInputTuple tupleWithSecond;
while (tupleStates.contains(first)) {
    tupleWithSecond = FunctionInputTuple.getInstance(
            tuple.getSymbol(), tupleStates);
    // do some computation, replace "first" in the list
    equivalent &= equivalence.equivalent(firstValue, secondValue);
}
Now my question is: does it matter that I declare tupleWithSecond
outside the loop? I have the impression that if I leave the first line
out and instead add "FunctionInputTuple" before the third line, a new
pointer slot is created on the stack for each iteration. Or is the
compiler smart enough to see this and optimise it away?
TIA, H.
--
Hendrik Maryns
==================
www.lieverleven.be
http://aouw.org
I have a question out of curiosity, since I have seen some of your other
posts. Are you perhaps thinking a bit too much in terms of lists and
Lisp/Scheme? I am just curious, as you are clearly demonstrating that
chain of thought in your comments and examples. My point is that it might
be in the way of utilising (or perhaps understanding) Java correctly,
i.e. writing code that follows Java idioms instead of Lisp ones,
essentially using the wrong language for how you think.
Just a thought though.
/tom
> I do some looping like this:
>
> FunctionInputTuple tupleWithSecond;
> while (tupleStates.contains(first)) {
>     tupleWithSecond = FunctionInputTuple.getInstance(
>             tuple.getSymbol(), tupleStates);
>     // do some computation, replace "first" in the list
>     equivalent &= equivalence.equivalent(firstValue, secondValue);
> }
>
> Now my question is: does it matter that I declare tupleWithSecond
> outside the loop? I have the impression that if I leave the first line
> out and instead add "FunctionInputTuple" before the third line, a new
> pointer slot is created on the stack for each iteration. Or is the
> compiler smart enough to see this and optimise it away?
In general you should limit the scope of names, so
declaring the variable inside the loop is preferable.
I don't know the technical details, and the two cases
*may* in principle differ performance-wise (I don't
think they do), but this would be what is commonly
known as "premature optimization" and should be avoided
anyway.
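For illustration, a minimal sketch of the two styles being compared.
All names here (ScopeDemo, hasMore(), nextResult()) are made up for the
example; in the OP's code they would be the loop condition and the
FunctionInputTuple lookup.

    class ScopeDemo {
        // Placeholder helpers standing in for the real loop condition
        // and the real per-iteration computation.
        private static int remaining = 3;
        private static boolean hasMore()    { return remaining-- > 0; }
        private static String  nextResult() { return "result " + remaining; }

        // Variant 1: declared outside the loop.
        static void declaredOutside() {
            String current;
            while (hasMore()) {
                current = nextResult();
                System.out.println(current);
            }
        }

        // Variant 2: declared inside the loop -- smallest possible scope.
        static void declaredInside() {
            while (hasMore()) {
                String current = nextResult();
                System.out.println(current);
            }
        }

        public static void main(String[] args) {
            declaredOutside();
            remaining = 3;
            declaredInside();
        }
    }

Both variants compile to essentially the same bytecode; only the
visibility of 'current' differs, which is why the smallest-scope form is
usually preferred.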
tom fredriksen wrote:
I do have some interest in Lisp, but I have never used it. Thing is,
part of what I am programming now (tree automata and manipulations
thereof) has been programmed before in Lisp, and I look at that code
from time to time to get some inspiration. So I might be contaminated
there. But I only do Java (until now). Just from time to time, I see
some useful constructs there, and would want to use them in Java too...
The question about lists could just as well be stated as a comparison
with Perl lists. This question is about coding style and performance; I
don't see a link to Lisp here. In Lisp, this would probably involve
mapcars and stuff. If you want to enlighten me on the Java idiom for a
loop like the above, please feel free to do so.
> Just a thought though.
You could have mailed this to me privately. Unfortunately, your
e-mail address does not work, so I have to clutter the NG with this.
H.
--
Hendrik Maryns
==================
www.lieverleven.be
http://aouw.org
Seconding that: the JVM is very likely to optimize this situation. I
would definitely prefer to limit the scope as much as possible, to avoid
errors from accidentally reusing the same value and to enable proper
data-flow analysis by the compiler.
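A small sketch of that second point. All names here (Item, find(),
process()) are invented for illustration; only the scoping pattern is
the point.

    class ScopeAndFlow {
        static class Item {}
        static Item find(int k)      { return (k % 2 == 0) ? new Item() : null; }
        static void process(Item it) { System.out.println(it); }

        // Declared outside and pre-initialised: a branch that forgets the
        // assignment silently reuses the previous iteration's value.
        static void outside(int[] keys) {
            Item it = null;
            for (int k : keys) {
                if (find(k) != null) {
                    it = find(k);
                }
                process(it);            // compiles; may process a stale Item
            }
        }

        // Declared inside without an initialiser: the compiler's
        // definite-assignment check catches the same mistake.
        static void inside(int[] keys) {
            for (int k : keys) {
                Item it;
                if (find(k) != null) {
                    it = find(k);
                }
                // process(it);         // javac: "variable it might not
                //                      //  have been initialized"
            }
        }
    }

With the variable confined to the loop body, a forgotten assignment
becomes a compile-time error instead of a silent reuse.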
Kind regards
robert
I think it wise to write logically correct, simple code. Run it,
test it. If there are performance complaints, think of them as a
modification of the service level agreement. In this way you are not
optimizing, but meeting requirements.
There is a quote in Jon Bentley's book on optimization that comes to
mind; it goes like:
Two Rules of Optimization:
Rule #1: Don't optimize.
Rule #2: [For experts only] Don't optimize yet.
If we separately consider experts and non-experts:
For non-experts: "don't optimize" means that at no point in time should
they optimize.
For experts: still don't optimize, but you can contemplate
possibilities. If there ends up being a performance reason for
modifying source you'll have some plans, but you did not do it yet.
When the performance reason comes you'll be meeting requirements and
the fact that you are optimizing will be secondary.
This is part of how I view optimizing. The other main part of my
optimization thinking is regression tests. That is: the slow, simple,
correct version of code stays and is used to verify results from any
attempted sped up versions.
All the best,
Opalinski
opa...@gmail.com
http://www.geocities.com/opalpaweb/
> Now my question is: does it matter that I declare tupleWithSecond
> outside the loop?
Doesn't make any difference from an efficiency point-of-view (with some minor
caveats). The same bytecode is emitted whether you declare the variable inside
the loop or outside. There are, of course, good software engineering reasons
for declaring the variable in the most restricted scope possible.
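If you want to check this yourself, here is a sketch: compile something
like the class below and compare the output of javap -c for the two
methods. The class and its method names are made up for the example; on
the compilers I've looked at, the instruction sequences come out the
same, with only the local-variable slot numbers swapped (as discussed
further down the thread).

    class Disasm {
        static int source() { return 42; }

        static void outer() {
            int value;
            for (int i = 0; i < 10; i++) {
                value = source();
            }
        }

        static void inner() {
            for (int i = 0; i < 10; i++) {
                int value = source();
            }
        }
    }

Running "javap -c Disasm" then shows the two methods differing only in
which slot the istore instruction targets.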
-- chris
> Now my question is: does it matter that I declare tupleWithSecond
> outside the loop?
Slightly curious (and ignorant of compiler optimisations), I ran the
following code a few times with the appropriate lines tweaked.
Over 10 million iterations, the difference between declaring inside and
outside the loop was 0.7%. That's a price worth paying for the benefits
of variable-scope minimising.
.ed
--
www.EdmundKirwan.com - Home of The Fractal Class Composition.
class Loop {
    private class Test {
        private int index = 0;

        Test() {
        }

        void execute() {
            for (int i = 0; i < 100; i++) {
                index++;
            }
        }
    }

    public static void main(String[] args) {
        new Loop().execute();
    }

    void execute() {
        long startTime = System.currentTimeMillis();
        // Test test = null;
        for (int i = 0; i < 10000000; i++) {
            Test test = new Test();
            test.execute();
        }
        System.out.println("Time elapsed: "
                + (System.currentTimeMillis() - startTime));
    }
}
I use, as a general rule:
declare local variables at first use.
Enough space for all local variables in a method is allocated on the stack
frame when the method starts, regardless of their source scope. So where you
declare a local variable doesn't (usually) matter to performance. There are
a couple of caveats here: if you declare variables inside, say, loops, and
you do this more than once in a method, odds are that the slots allocated to
those variables will be re-used. Likewise, local variables are not
de-allocated until the /method/ terminates, not the block that they are
declared in. That means that a local variable that holds a reference to an
object can prevent that object from being garbage collected until method
termination -- even if the variable goes out of scope earlier.
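To illustrate that last point, a sketch only: whether the array really
survives depends on the VM's liveness analysis, so treat this as the
conservative behaviour described above. The class and method names are
invented.

    class HoldsReference {
        public static void main(String[] args) {
            workThenWait();
        }

        static void workThenWait() {
            {
                byte[] big = new byte[50 * 1024 * 1024];    // ~50 MB
                System.out.println("allocated " + big.length + " bytes");
            }   // 'big' is out of scope here...

            // ...but its stack slot may still hold the reference until the
            // method returns, so a GC triggered in here may not reclaim it.
            for (int i = 0; i < 5; i++) {
                System.gc();                                // hint only
            }
        }
    }

Nulling the variable before leaving the block, or moving the work into a
separate method, removes that risk.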
-- Adam Maass
> Slightly curious (and ignorant of compiler optimisations), I ran the
> following code a few times with the appropriate lines tweaked.
>
> Over 10 million iterations, the difference between declaring inside and
> outside the loop was 0.7%. That's worth the benefits of variable-scope
> minimising.
This was obviously nonsense, so I ran it myself. And I got exactly[*] the same
result! Declaring 'test' inside the loop makes it run ~7% slower than with it
outside.
([*] I'm assuming you dropped a decimal point somewhere.)
(That was using a 1.5.0 client VM. I modified the code to run execute()
repeatedly so as to allow time for the JITer to do its stuff. I also checked
that the inner loops weren't being optimised away.)
Looking at the bytecodes the only difference between the two (besides the
explicit assignment of null which is outside the loop and therefore irrelevant)
is that the stack slots used for variables "test" and "i" are interchanged (if
'test' is declared outside, then it gets stack slot 3 and 'i' uses slot 4,
otherwise 'test' uses 4 and 'i' uses 3). There are /no/ other differences.
Considering that the loop in execute() is actually quite long (what with object
creation, initialisation, and the nested loop) I find it quite surprising that
such a small change has any detectable effect. Shocking, in fact.
Live and learn...
-- chris
On my system, running under Java 1.5.0_06, the one with
'test' declared inside the loop runs about .7% faster, fairly
consistently (on a second call to new Loop().execute()).
With the attached test I get
server vm
50000000 iterations
Rehearsal
Outer: 1015ms
Local: 953ms
0.9389162561576355 local/outer
Test
Outer: 969ms
Local: 938ms
0.9680082559339526 local/outer
client vm
50000000 iterations
Rehearsal
Outer: 953ms
Local: 922ms
0.9674711437565582 local/outer
Test
Outer: 922ms
Local: 922ms
1.0 local/outer
On Windows, Java 1.4.2_06: in all tests the worst case was that the "local"
variant was 1.7% slower than the "outer" in server mode.
robert
Chris Uppal wrote:
That's the answer I sought (and hoped for); now I can confidently follow
the smallest-scope advice, which I, of course, heartily agree with.
Thanks all, H.
--
Hendrik Maryns
==================
www.lieverleven.be
http://aouw.org
> > This was obviously nonsense, so I ran it myself. And I got exactly[*]
> > the same result! Declaring 'test' inside the loop makes it run ~7%
> > slower than with it outside.
>
> On my system, running under Java 1.5.0_06, the one with
> 'test' declared inside the loop runs about .7% faster, fairly
> consistently (on a second call to new Loop().execute()).
Odd. On mine, running 1.5.0 (ie. first release) looping explicitly so that the
GC load stabilises and the JIT is warmed up, I repeatably get that declaring
the 'test' variable inside the loop slows it down by 7.5%.
Here are the raw numbers. The lines starting with 'W' are warmup runs which are
not included in the average. The first column is the 'inner' time, the second
is the 'outer':
W: 4977 4757
W: 5027 4687
W: 5127 4677
0: 5057 4747
1: 5057 4667
2: 5127 4677
3: 5037 4777
4: 5027 4677
5: 5037 4767
6: 5047 4667
7: 5138 4666
8: 5048 4756
9: 5038 4686
10: 5108 4696
11: 5028 4767
12: 5047 4666
13: 5138 4667
14: 5047 4767
15: 5037 4677
16: 5077 4727
17: 5037 4677
18: 5127 4677
19: 5037 4767
Inner mean: 5064
Outer mean: 4708
And I'll append the code.
-- chris
============================
class Loop
{
    static final int WARMUP = 3;    // sufficient by experiment
    static final int LOOPS = 20;

    private class Test
    {
        private int index = 0;

        void execute()
        {
            for (int i = 0; i < 100; i++)
                index++;
        }
    }

    public static void
    main(String[] args)
    {
        Loop loop = new Loop();
        long totalInner = 0;
        long totalOuter = 0;
        long inner = 0;
        long outer = 0;
        for (int i = 0; i < WARMUP; i++)
        {
            inner = loop.executeInner();
            totalInner += inner;
            outer = loop.executeOuter();
            totalOuter += outer;
            System.out.println("W:\t" + inner + "\t" + outer);
        }
        totalInner = totalOuter = 0;
        for (int i = 0; i < LOOPS; i++)
        {
            inner = loop.executeInner();
            totalInner += inner;
            outer = loop.executeOuter();
            totalOuter += outer;
            System.out.println(i + ":\t" + inner + "\t" + outer);
        }
        System.out.println("Inner mean: " + totalInner / LOOPS);
        System.out.println("Outer mean: " + totalOuter / LOOPS);
    }

    long
    executeInner()
    {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++)
        {
            Test test = new Test();
            test.execute();
        }
        return System.currentTimeMillis() - start;
    }

    long
    executeOuter()
    {
        long start = System.currentTimeMillis();
        Test test = null;
        for (int i = 0; i < 10000000; i++)
        {
            test = new Test();
            test.execute();
        }
        return System.currentTimeMillis() - start;
    }
}
============================
Chris Uppal wrote:
Interestingly, on a 64-bit machine, but with a 32-bit JVM, I get the
following. Anyone care to explain?
W: 2598 2515
W: 2449 2447
W: 2446 2446
0: 2448 2447
1: 2446 2447
2: 2448 2446
3: 2447 2446
4: 2448 2447
5: 2446 2448
6: 2447 2446
7: 2436 2433
8: 2433 2429
9: 2430 2431
10: 2439 2435
11: 2435 2434
12: 2432 2430
13: 2429 2430
14: 2436 2438
15: 2436 2435
16: 2430 2434
17: 2429 2430
18: 2435 2438
19: 2435 2435
Inner mean: 2438
Outer mean: 2437
And then, using a 64-bit JVM:
W: 299 281
W: 228 211
W: 208 193
0: 184 180
1: 187 180
2: 195 180
3: 183 180
4: 185 180
5: 185 182
6: 181 183
7: 182 182
8: 181 182
9: 183 182
10: 184 183
11: 184 180
12: 185 180
13: 189 180
14: 185 182
15: 183 188
16: 183 182
17: 182 183
18: 182 185
19: 181 182
Inner mean: 184
Outer mean: 181
Oops! Quite a bit faster, I guess I'll use that one from now on!
Ah, I wish I could afford a computer like this one myself...
I guess the difference just gets negligible because this machine is so fast.
So I added a variable
static final int TEST_LOOPS = 10000;
And changed this little piece of code:
    private class Test
    {
        private int index = 0;

        void execute()
        {
            for (int i = 0; i < TEST_LOOPS; i++)
                index++;
        }
    }
To get the following result (with the 64-bit JVM):
W: 6551 6497
W: 6551 6553
W: 6583 6437
0: 6525 6456
1: 6533 6470
2: 6536 6472
3: 6609 6429
4: 7046 6559
5: 6695 6553
6: 6833 6767
7: 6512 6423
8: 6500 6420
9: 6521 6787
10: 6490 6409
11: 6488 6324
12: 6499 6331
13: 6498 6465
14: 6488 6309
15: 6496 6546
16: 6505 6410
17: 6518 6436
18: 6635 6448
19: 6499 6523
Inner mean: 6571
Outer mean: 6476
So indeed this confirms your results, though, as can be seen, not
consistently from run to run. This might be a bit biased, as I did do
other things in the background while running the test.
Even more: with TEST_LOOPS = 10, I get the following, which seems to be
dominated by the creation of the objects, as the 10 loops can barely be
significant (note the small difference from the 100-loop case):
inner outer
W: 255 258
W: 214 208
W: 211 211
0: 211 205
1: 202 189
2: 187 171
3: 173 161
4: 163 171
5: 166 156
6: 162 183
7: 198 196
8: 191 178
9: 181 163
10: 162 153
11: 159 155
12: 158 153
13: 158 155
14: 157 157
15: 158 154
16: 158 154
17: 154 158
18: 156 158
19: 154 158
Inner mean: 170
Outer mean: 166
Cheers, H.
--
Hendrik Maryns
==================
www.lieverleven.be
http://aouw.org
> Odd. On mine, running 1.5.0 (ie. first release) looping explicitly so
> that the GC load stabilises and the JIT is warmed up, I repeatably get
> that declaring the 'test' variable inside the loop slows it down by 7.5%.
A few other datapoints:
I tried the same code under 1.4.2_b28 on the same machine. The difference is
~4%.
I tried it under 1.5.0_5 on a different machine. The difference is ~3%.
Given that, and the figures that Larry posted, it seems to be a quirk of how
the generated code interacts with the specific machine.
What I still can't get over is how /big/ the effect is -- remember that we are
not in the inner loop.
I don't plan to change my coding style, though ;-)
-- chris
> Interestingly, on a 64-bit machine, but with a 32-bit JVM, I get the
> following. Anyone care to explain?
> [...]
> Inner mean: 2438
> Outer mean: 2437
>
> And then, using a 64-bit JVM:
> [...]
> Inner mean: 184
> Outer mean: 181
>
> Oops! Quite a bit faster, I guess I'll use that one from now on!
That sounds as if it might be the 64-bit JVM running automatically in -server
configuration (which seems to optimise most, but not all, of the test away[*]).
Try giving it an explicit -client flag.
([*] At least it does on my machines under both 1.4 and 1.5. But not,
apparently, on whatever machine Robert's using, even though that is a Windows
box like mine. Very odd...)
-- chris
Win 2k Server, P4 1.8GHz, 2GB main mem.
I changed the test and also increased mem with -Xmx1024m -Xms1024m
-server
100000000 iterations
Rehearsal
Outer: 1437ms
Local: 1453ms
1.011134307585247 local/outer
Outer: 1391ms
Local: 1390ms
0.9992810927390366 local/outer
1.0053041018387554 local/outer
Test
Outer: 1297ms
Local: 1282ms
0.9884348496530455 local/outer
Outer: 1296ms
Local: 1282ms
0.9891975308641975 local/outer
Outer: 1297ms
Local: 1391ms
1.0724749421742483 local/outer
Outer: 1297ms
Local: 1265ms
0.9753276792598303 local/outer
Outer: 1297ms
Local: 1281ms
0.9876638396299152 local/outer
Outer: 1282ms
Local: 1343ms
1.047581903276131 local/outer
Outer: 1313ms
Local: 1297ms
0.9878141660319878 local/outer
Outer: 1282ms
Local: 1281ms
0.999219968798752 local/outer
Outer: 1281ms
Local: 1297ms
1.0124902419984387 local/outer
Outer: 1297ms
Local: 1281ms
0.9876638396299152 local/outer
1.0047144292449184 local/outer
-client
100000000 iterations
Rehearsal
Outer: 1469ms
Local: 1391ms
0.9469026548672567 local/outer
Outer: 1390ms
Local: 1375ms
0.9892086330935251 local/outer
0.9674711437565582 local/outer
Test
Outer: 1375ms
Local: 1359ms
0.9883636363636363 local/outer
Outer: 1375ms
Local: 1391ms
1.0116363636363637 local/outer
Outer: 1422ms
Local: 1390ms
0.9774964838255977 local/outer
Outer: 1391ms
Local: 1391ms
1.0 local/outer
Outer: 1407ms
Local: 1515ms
1.0767590618336886 local/outer
Outer: 1438ms
Local: 1437ms
0.9993045897079277 local/outer
Outer: 1375ms
Local: 1375ms
1.0 local/outer
Outer: 1344ms
Local: 1391ms
1.0349702380952381 local/outer
Outer: 1375ms
Local: 1359ms
0.9883636363636363 local/outer
Outer: 1375ms
Local: 1359ms
0.9883636363636363 local/outer
1.006485551632197 local/outer
I guess the differences may be due to GC kicking in, clock inaccuracy or such.
IMHO they seem negligible.
robert
> I repeatably get
> that declaring the 'test' variable inside the loop slows it down by 7.5%.
I'm sorry to be continually following myself up, but here's another datum:
I tried changing:
long start = System.currentTimeMillis();
to:
long x = System.currentTimeMillis();
long start = System.currentTimeMillis();
in both executeInner() and executeOuter(). The variable 'x' is not otherwise
referenced, but does not seem to be optimised away. That exactly reversed the
difference between the two methods. I now suspect that it's to do with what
positions variables end up at in the physical RAM used for the stack. Whether
it's an alignment effect, or some oddity of caching, I don't know and don't
plan to find out ;-)
-- chris
> I guess the differences may be due to GC kicking in, clock inaccuracy or
> such. IMHO they seem negligible.
Do you mean the differences in overall speed between -client and -server ? If
so then they are certainly /much/ smaller than the order-of-magnitude
differences I see for my test. (using 1.5 GHz, WinXP sp1, jdk 1.4.2 or 1.5.0)
I presume it's something to do with the different structures of our test code
in terms of what loops are in what methods, and which methods get called /in/
loops.
-- chris
>
>Now my question is: does it matter that I declare tupleWithSecond
>outside the loop? I have the impression that if I leave the first line
>out and instead add ?FunctionInputTuple? before the third line, that a
>new pointer space is created on the stack for each loop.
All slots on the stack frame for locals are allocated by a single
addition (or subtraction) of the stack pointer on entering the method.
The advantages of putting them inside:
1. It helps an optimiser. It knows they can't be used in any way outside
the loop, so it does not need to worry about saving them in RAM in
case you jump out of the loop.
2. It helps others understand your code.
--
Canadian Mind Products, Roedy Green.
http://mindprod.com Java custom programming, consulting and coaching.
>> Over 10 million iterations, the difference between declaring inside and
>> outside the loop was 0.7%. That's worth the benefits of variable-scope
>> minimising.
Try running the benchmark several times and note the wobble! Other things
going on in the background, out of your control, interfere.
>That sounds as if it might be the 64-bit JMV running automatically in -server
>configuration (which seems to optimise most, but not all, of the test away[*]).
>Try giving it an explicit -client flag.
See http://mindprod.com/jgloss/benchmark.html
about the problems of the optimiser kicking in at different times. You
want to measure either "cold" or "warm", i.e. pre- or post-optimisation.
> long start = System.currentTimeMillis();
System.nanoTime gives you more accurate timing. You are going through
the same puzzles I have been through the last week. You could benefit from my
discoveries, and you might even use my timing harness, which by now is
getting snazzy -- it produces pretty HTML to post the results.
I don't think so:
<quote>This method provides nanosecond precision, but not necessarily
nanosecond accuracy. No guarantees are made about how frequently values
change.</quote>
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#nanoTime()
Kind regards
robert
I was referring to the difference between execution times with a single VM
between several invocations of "local" and "outer" as well as between
average times for "local" and "outer". Sorry, if I wasn't clear enough.
> I presume it's something to do with the different structures of our
> test code in terms of what loops are in what methods, and which
> methods get called /in/ loops.
If you refer to differences between -server and -client, then yes. That's
likely the case.
Cheers
robert
>I was referring to the difference between execution times with a single VM
>between several invocations of "local" and "outer" as well as between
>average times for "local" and "outer". Sorry, if I wasn't clear enough.
In a pedestrian interpreter you would expect no difference with the
variable declared locally in the outer block, but you would expect a
big slowdown if you made it a static or, even worse, an instance variable.
> System.nanoTime gives you more accurate timing.
Yes, I /know/ !!
But that accuracy hardly matters when I'm taking an average from a
significant number of runs. Especially as the time we're measuring is
in the order of several seconds. The difference between millisecond or
microsecond (or whatever I would actually get from the "nano" time
clock) resolution is trivial in this case.
And I wanted the code to run on 1.4.
> You are going through the same puzzles I have been the last week.
Nope. This how-to-do-tests-that-actually-measure-something is old hat.
Java adds a few twiddles, but they have been old hat too for several
years at least.
I'll admit that I didn't bother to estimate error bars for the results;
didn't seem worthwhile.
The issue is not how to measure the effect -- that's bleedin' obvious
-- the question is what is the cause, and how to pin that down as
tightly as possible.
-- chris
>I don't think so:
>
><quote>This method provides nanosecond precision, but not necessarily
>nanosecond accuracy. No guarantees are made about how frequently values
>change.</quote>
True, but in the real world this is implemented with a cycle counter
attached to the master clock. The actual source is You are not
interested in real world time, but clock cycle time to compare
algorithms. Pentium RDTSC provides subnanosecond resolution. Java
seem to do some sort of calibration to give you approximate
nanoseconds.
See http://mindprod.com/jgloss/time.html#RDTSC
Interesting insights! Thanks for that! I'd nevertheless not rely on this,
because it's not guaranteed by the documentation and because it's not needed
given the huge number of iterations.
robert
> > <quote>This method provides nanosecond precision, but not
> > necessarily nanosecond accuracy. No guarantees are made about how
> > frequently values change.</quote>
>
> True, but in the real world this is implemented with a cycle counter
> attached to the master clock.
System.nanoTime() is implementation dependent.
On Windows boxes a 1.5 JVM uses the Win32 function
QueryPerformanceCounter() if that's available (afaik it normally
(always?) is). That has no defined resolution; you have to use
QueryPerformanceFrequency() to interpret the results. On the machine
I'm typing into that's 3,579,545 beats per second.
On Linux, it seems that a 1.5 JVM would like to use clock_gettime(),
but in fact (according to the comments) makes do with gettimeofday()
since clock_gettime() isn't necessarily monotonic. I don't know what
actual resolution gettimeofday() has but the result is reported in
microseconds so it can't be better than that.
On Solaris, it does something else again, apparently based on
gethrtime(). I don't know what resolution Solaris gethrtime() has. I
found one claim that it was 50 nanoseconds. OTOH, I've found claims
that it's based on some sort of cycle count.
-- chris
Roedy Green wrote:
> On Thu, 23 Feb 2006 16:37:57 +0100, "Robert Klemme" <bob....@gmx.net>
> wrote, quoted or indirectly quoted someone who said :
>
>> I was referring to the difference between execution times with a single VM
>> between several invocations of "local" and "outer" as well as between
>> average times for "local" and "outer". Sorry, if I wasn't clear enough.
>
> In a pedestrian interpreter
What is this? Or what do you mean?
H.
--
Hendrik Maryns
==================
www.lieverleven.be
http://aouw.org
Further, certainly for good C compilers, a style like this (two separate
loops, each with its own block-local variable)

    for (...) {
        int variable1;
        /* ... */
    }
    for (...) {
        int variable2;
        /* ... */
    }

allows the same storage (hopefully a register) to be used
for variable1 and variable2.
BugBear
>> In a pedestrian interpreter
>
>What is this? Or what do you mean?
Something like the Java 1.0 JVM or a JVM you would find today inside
a cell phone. It interprets each byte code literally, one at a time.
It does no compilation to machine code.
>On Windows boxes a 1.5 JVM uses the Win32 function
>QueryPerformanceCounter() if that's available (afaik it normally
>(always?) is). That has no defined resolution, you have to use
>QueryPerformanceFrequency() to interpret the results. On the machine
>I'm typing into that's 3,579,545 beats per second.
What I would like to know is: does Java make that adjustment for you
when you call nanoTime, or is it up to you to calibrate the raw
clock-cycle number? I suspect the latter. This means that nanoTime
results are only useful for comparing algorithms on the same hardware.
If you want to benchmark hardware you would have to use
System.currentTimeMillis().
A new pointer space will be created each time it enters the loop.
The compiler doesn't optimize the code in any case.
tnx
Biswajit Biswal
Biswajit Biswal wrote the following on 02/24/2006 05:16 PM:
> Hi
>
> A new pointer space will be created each time it enters the loop.
> The compiler doesn't optimize the code in any case.
Are you sure? I would have thought something else from the answers I
got from others...
H.
--
Hendrik Maryns
==================
www.lieverleven.be
http://aouw.org
The System javadoc says:
"nanoTime()
Returns the current value of the most precise available
system timer, in nanoseconds."
not clock ticks, or anything else system-dependent. Of course, the
actual resolution is system-dependent, but it should always be at least
as good as currentTimeMillis.
Patricia
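For completeness, a minimal sketch of using both clocks side by side.
It requires Java 5+ for System.nanoTime(); the class name and the
square-root loop are arbitrary stand-ins for a real workload.

    class TimeIt {
        public static void main(String[] args) {
            long t0ms = System.currentTimeMillis();
            long t0ns = System.nanoTime();

            double sum = 0;
            for (int i = 0; i < 10000000; i++) {
                sum += Math.sqrt(i);             // stand-in workload
            }

            long elapsedMs     = System.currentTimeMillis() - t0ms;
            long elapsedNsAsMs = (System.nanoTime() - t0ns) / 1000000;

            // Print sum so the loop cannot be optimised away entirely.
            System.out.println("sum = " + sum);
            System.out.println("currentTimeMillis: " + elapsedMs + " ms");
            System.out.println("nanoTime:          " + elapsedNsAsMs + " ms");
        }
    }

As noted earlier in the thread, the extra precision only matters for
short runs; for multi-second benchmarks like the ones posted here,
either clock is adequate.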