Re: [RBS] Re: Micro-nature of macro benchmarks (bm_norvig_spelling.rb)


Monty Williams

Feb 18, 2010, 11:49:04 AM
to ruby-bench...@googlegroups.com
Shri, Evan:

Both of you have suggested improvements that would make the benchmarks more meaningful. Has there been any progress towards that goal? I know all the implementers have incredibly busy schedules, but perhaps someone in the Ruby community would be able to pitch in.

We could document what the benchmarks actually measure (e.g., whether Norvig really is a "micro-benchmark for String#scan"), fix such benchmarks to remove those hotspots, or come up with new and improved benchmarks.

Any ideas?

-- Monty

----- Original Message -----
From: "Evan Phoenix" <ev...@fallingsnow.net>
To: ruby-bench...@googlegroups.com
Sent: Monday, November 16, 2009 5:21:40 PM GMT -08:00 US/Canada Pacific
Subject: [RBS] Re: Micro-nature of macro benchmarks (bm_norvig_spelling.rb)


Just a quick note:

I'm currently working on a new suite which is a reorganization of the existing RBS plus the addition of more benchmarks. It will be released this week at RubyConf. It strives not to exercise every syntax element of an implementation, but rather to get a broader feel for overall performance.

The biggest change is the organization into tiers. Each benchmark is examined closely to see exactly what it exercises and is put into a tier that reflects how low-level it is.

It will stress that performance in tier 0, the most trivial benchmarks, does not always translate to performance in higher tiers, and that all tiers must be run to get an accurate picture of a system's overall performance.

- Evan

On Nov 16, 2009, at 4:13 PM, Shri Borde wrote:

>
> As another example, in bm_list, for a high number of iterations, about 80% of the time in MRI is spent concatenating a huge string (the string representation of all the elements in the list). If I add a statement to cap the length of the string, the benchmark runs faster even though it is doing more computation. So this is another example of a macro benchmark degenerating into a micro-benchmark for String#<<.
>
> I could modify the benchmark to remove such unintended hotspots. However, whether this is the right approach depends on whether the goal of these shootout benchmarks is to compare Ruby implementations or to compare different languages. If the goal is to compare Ruby implementations, then I could remove the unintended hotspots. However, a better approach would be to drop those benchmarks and instead write new benchmarks that use large, existing, real-world Ruby libraries (erb, rdoc, optparse, rexml, Date, pathname, Rails, etc.). If the goal is to compare different languages, then removing the unintended hotspots is not the right solution.
>
> I don't think duplicating the benchmark as both a micro and a macro benchmark is a good idea, as the benchmark is not a great one to begin with. Ideally, we would remove such benchmarks (or at least move them to a folder called "shootout" where they are considered neither micro nor macro) and add other, better micro and macro benchmarks, but that is not going to be easy. So, assuming the main goal is to compare the Ruby implementations, I will submit patches to remove the unintended hotspots. Let me know if I should pursue any other approach...
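For illustration, here is a minimal sketch of the kind of capping Shri describes above. The method names, list contents, and cap value are made up for this example and are not the actual bm_list code:

def list_to_s(list)
  # Builds the string representation of every element; for large lists
  # most of the time ends up in String#<<.
  s = ""
  list.each { |e| s << e.to_s << " " }
  s
end

MAX_LEN = 1_000   # arbitrary cap, not a value taken from the suite

def list_to_s_capped(list)
  # Same traversal, but the accumulated string is capped so String#<<
  # no longer dominates the run time.
  s = ""
  list.each { |e| s << e.to_s << " " if s.length < MAX_LEN }
  s
end

p list_to_s_capped((1..100_000).to_a).length

With the cap in place, the rest of the list manipulation becomes visible in a profile instead of a single String#<< hotspot.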
>
> -----Original Message-----
> From: ruby-bench...@googlegroups.com [mailto:ruby-bench...@googlegroups.com] On Behalf Of rogerdpack
> Sent: Monday, November 16, 2009 8:34 AM
> To: Ruby Benchmark Suite
> Subject: [RBS] Re: Micro-nature of macro benchmarks (bm_norvig_spelling.rb)
>
>
>
>> So this is really a micro-benchmark for String#scan, and to a lesser
>> degree, Hash#+. The rest of the Ruby code hardly shows up in the
>> measurements. Should the benchmark be fixed such that most of the time
>> is spent in the other functions? The training phase could be moved to
>> a setup phase outside of the main benchmark loop.
>
>
> Perhaps it originated from here?
>
> http://norvig.com/spell-correct.html
>
> One option would be to move it to micro-benchmarks.
> Another might be to have two tests: one as-is, and one that exercises
> just the latter half.
> -r
>
>>
>> Ideally, the macro-benchmarks would not have any single function
>> accounting for more than 5% (say) of the entire execution time.
>> Otherwise, it is measuring a narrow aspect of the Ruby implementation
>> and is not really macro. Of course, it is separately useful to have
>> micro-benchmarks for individual library types and methods.
>>
>> Regards
>> Shri
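
To make Shri's suggestion above concrete, here is a minimal sketch of moving the training phase out of the measured region. The words/train methods follow Norvig's spell-correct.html, while the corpus, loop count, and timing call are stand-ins rather than the actual RBS harness:

require 'benchmark'

def words(text)
  text.downcase.scan(/[a-z]+/)
end

def train(features)
  model = Hash.new(1)
  features.each { |f| model[f] += 1 }
  model
end

# Setup phase: String#scan is exercised here, once, outside the timing.
corpus = "the quick brown fox jumps over the lazy dog " * 1_000
nwords = train(words(corpus))

# Measured region: only the lookup side of the corrector is timed.
elapsed = Benchmark.realtime do
  100_000.times { nwords["benchmark"] }
end
puts elapsed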


The GitHub project is located at http://github.com/acangiano/ruby-benchmark-suite

Shri Borde

Feb 18, 2010, 2:39:01 PM
to ruby-bench...@googlegroups.com
I have not made any changes, as Evan had mentioned that a significant reorganization was coming.

The Rails and RDoc benchmarks are a good start towards moving to macro-benchmarks. Everything else should be moved to the micro-benchmarks folder IMO.
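
As one hedged example of the library-driven macro-benchmark direction discussed in this thread, here is a sketch using the stdlib erb; the template and workload sizes are invented for illustration:

require 'erb'
require 'benchmark'

# Render a small ERB template many times so the benchmark spends its time
# inside the library rather than in one hand-written hotspot.
TEMPLATE = ERB.new(<<TPL)
<ul>
<% items.each do |item| %>
  <li><%= item %></li>
<% end %>
</ul>
TPL

items = (1..200).map { |i| "item #{i}" }

elapsed = Benchmark.realtime do
  1_000.times { TEMPLATE.result(binding) }
end
puts elapsed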
