Hi All,
1. Huge differences in processing speed for the same task
I want to make a catalogue of how much time a specific type of
calculation or algorithm takes to be processed, in relation to the others. Just for
comparison. This can be very helpful in the construction of more complex
filters, in order to choose the right option and strategy. Therefore I made
a tool in which this can be measured inside a simple loop. The loop typically runs
about 40 million times, to create enough accuracy. To my surprise I
notice huge differences between repeats of the same measurement, on the order of 1 to 2.
Thus a specific calculation could give e.g. 250 ticks, but repeated immediately
afterwards it could also give up to 480 ticks – and back again!
I’m aware that the internal management of a CPU can differ between
operations, but I can hardly believe the size of the effect. Is this really the
way a computer works? If so, it will be hard to obtain reliable data. Any ideas
to improve this, or what could be done?
- - - - - - - - - - - - - - - - - - - - - - -
2. How to deal with BoxUnits and pixels in the construction of a filter?
I understand the idea of stabilizing the GUI across different platforms with the
introduction of BoxUnits. But once you want to apply this automatic adaptation,
you quickly collide with an unmanageable number of variables that still won’t
guarantee a reasonable result. Even the horizontal and vertical units can differ. In
such a case, do you choose the larger one?
Therefore my question: what is the right order of steps to follow to
make a more or less flexible GUI?
Another guess: or do you simply build 2 or 3 versions around the most commonly
used resolutions, so the actual user can pick the best choice?
Best Regards,
Paul
>1. Huge differences in processing speed for the same task
>I want to make a catalogue of how much time a specific type of calculation or algorithm takes to be processed, in relation to the others. Just for comparison. This can be very helpful in the construction of more complex filters, in order to choose the right option and strategy. Therefore I made a tool in which this can be measured inside a simple loop. The loop typically runs about 40 million times, to create enough accuracy. To my surprise I notice huge differences between repeats of the same measurement, on the order of 1 to 2. Thus a specific calculation could give e.g. 250 ticks, but repeated immediately afterwards it could also give up to 480 ticks – and back again!
>I’m aware that the internal management of a CPU can differ between operations, but I can hardly believe the size of the effect. Is this really the way a computer works? If so, it will be hard to obtain reliable data. Any ideas to improve this, or what could be done?
Thanks for responding, but this is beside the issue. I just wonder why the same calculation can differ so much, relatively, in processing speed from one moment to the next.
Paul
--
You received this message because you are subscribed to the Google Groups
"FilterMeister Mailing List (FMML)" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to filtermeiste...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/filtermeister/f82c397f-ad4a-4ab3-b5b5-e95983151367n%40googlegroups.com.
> I just wonder why a similar calculation can differ relatively so much in processing speed from one moment to another.
To view this discussion on the web visit https://groups.google.com/d/msgid/filtermeister/dda5634c-74e2-4101-ab0c-655ae20516ffn%40googlegroups.com.
Hi Roberto,
The ‘ticks’ are the time/value differences between two measurements with clock()
(an FM function). One call is placed just before and the second just after the execution
of an algorithm/calculation. How exactly it relates to real micro- or nanoseconds is
of no fundamental importance; what matters is the ratio between different types of
algorithms/calculations, since the circumstances of the measurement are equal (same
number of loops, etc.) and only the type of algorithm/calculation differs.
What is unexpected are the huge differences in repeatability of the same measurement,
notwithstanding the high chosen number of loops (e.g. roughly 40 million
iterations, about the size of a 12 Mp image). That it differs: yes. That it can
differ by as much as 1 to 2?... Difficult to understand. So what causes these huge
deviations in processing speed? Is this typical internal management of RAM or
something? Are there perhaps tricks to purge unnecessarily filled memory to keep
the speed always at its fastest? If we can understand why this happens, we
could to a significant extent optimize the execution times of filter processing.
Greetings,
Paul
To view this discussion on the web visit https://groups.google.com/d/msgid/filtermeister/op.1geb7fgvncx852%40roberto-asus-g11.home.