breakpoint support on 2.0


SASADA Koichi

Nov 29, 2012, 4:41:37 PM
to ruby-d...@googlegroups.com
I want to continue the discussion at
https://github.com/rocky/rb-threadframe/issues/3#issuecomment-10865038

Are breakpoints highly required?

--
// SASADA Koichi at atdot dot net

Rocky Bernstein

Nov 29, 2012, 8:47:13 PM
to ruby-d...@googlegroups.com
Comments inline.

On Thu, Nov 29, 2012 at 4:41 PM, SASADA Koichi <k...@atdot.net> wrote:
> I want to continue the discussion at
> https://github.com/rocky/rb-threadframe/issues/3#issuecomment-10865038

There is much in that thread that isn't germane to this discussion. For folks reading this group who haven't seen the thread, I'll summarize the relevant part cited above.

I asked about breakpoint support and suggested for reference https://github.com/rocky/rb-threadframe/tree/master/patches/1.9.3. Search for 130-brkpt.patch.


ko1 replied (with my further comments added):

> Your patch is not acceptable because of performance issue.


If you have benchmarks, I would really love to see that. It would also be a great thing to share with the community.

And anyone reading this, please make your own benchmarks and share them here. Thanks.

For any formal sort of benchmarking I rely on Mark Moseley, who reported the impact was negligible (something in the range of 1-2%).

My own experience in the 2 or so years of using this is that I've not noticed a degree of slowness with the entire collection that would make me want (let alone force me) to avoid using the patches. But my use might not be as heavy as others'.

When I do find such slowness, there are still other things that one can do to drive down the cost, such as switching the interpreter code loop. A coarse version of this is just to have two interpreters around and at the outset decide which one to use.

For those of you who do benchmarks, it would be useful to separate that breakpoint patch from the other patches, and to separate runs where breakpoints are never used (the majority of the time) versus those cases where a breakpoint is set.

Let me explain a little. There is a test to see if any breakpoint exists in an instruction sequence. If not, less testing is done.
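To sketch the idea in Ruby (the real test lives in C inside the VM loop; all of the names here are illustrative, not the patch's actual identifiers):

    # A toy Ruby model of the fast-path check; the patch does this in C.
    class ToyISeq
      def initialize
        @breakpoints = {}              # instruction offset => true
      end

      def set_breakpoint(offset)
        @breakpoints[offset] = true
      end

      def check(offset)
        return if @breakpoints.empty?  # the cheap test; usually taken
        yield offset if @breakpoints[offset]
      end
    end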

The reason for separating this patch is that various patches incur space or time overhead when they are used. So although the question of how much overhead is incurred for all the patches is interesting, it is also helpful to get a feel for specific individual patches like this one.
 

> From 2.0, Module#prepend can make a breakpoint around a method call.
> Is it not enough?


It is not enough.
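For concreteness, here is a minimal sketch of the Module#prepend approach being discussed (illustrative names, not from any real debugger):

    module MethodBreakpoint
      def compute(*args)
        puts "break before #{self.class}#compute"  # a debugger would stop here
        super
      end
    end

    class Worker
      prepend MethodBreakpoint  # Module#prepend, new in 2.0
      def compute(x)
        x * 2
      end
    end

    Worker.new.compute(21)      # prints the message, then returns 42

It can only stop at method boundaries, which is part of the problem discussed below.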
 

> I asked people using debuggers on Ruby.
> They require:
>
> (1) single step exec (TracePoint will help)
> (2) break point for methods (Module#prepend/TracePoint will help)
> (3) inspection (rb_debug_inspector will help)


I don't understand what "will help" means. It seems to imply it isn't complete. Is that right?

In Ruby 1.9.3 there already is a callback mechanism for single stepping and stopping on method calls and returns (1) and (2) above.
Unless set_trace_func() is going away, then I don't see what "help" TracePoint provides over this.
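For reference, the 1.9-era mechanism looks roughly like this (the handler body is illustrative):

    set_trace_func(proc do |event, file, line, id, binding, klass|
      case event
      when "line"            # fires for each new source line: single stepping
        puts "step: #{file}:#{line}"
      when "call", "return"  # method-entry/exit breakpoints hang off these
        puts "#{event}: #{klass}##{id} at #{file}:#{line}"
      end
    end)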

> I want to postpone fine-grained breakpoint support to 2.1.
> Maybe we need to consider instruction swapping and so on.

For the newbies reading this forum: I first brought this up 4-1/2 years ago <http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/21904>. I was told to wait until after 1.9.1. So almost a year later I brought it up again and was told that there was no road map and was asked to provide a patch <http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/21969>. And that formed the basis of that code.

I've considered instruction swap code. I believe rubinius used it for a while but no longer does. And my experience with what rubinius uses instead now is not without its slight problems <https://github.com/rubinius/rubinius/issues/629#issuecomment-10263074>.

That said, if people want to consider this, please by all means go ahead. The subtleties in the implementation and API right now make me want to shy away from this, especially given that what I have works well.



> Are breakpoints highly required?

The short answer is yes.

I think I understand why you ask that question, so although this is already long, I feel I want to give a more full and detailed answer. (Again, if you want the short answer, I've already given it.)

Just as Matz has said he wanted to write a programming language to be fun to use or make programmers happy, I feel the same way about debuggers. Using programming language design as an analogy I think best conveys the "truth" about how I feel here.

Suppose one were to ask, "Is not having to declare a variable before its use highly required?" Well, I suppose if one puts it this way there is a temptation to answer: of course not.

There are lots of programming languages out there that require variables to be declared before use. Those languages are used quite a lot by a lot of people. Furthermore, there are some people who feel that any language that doesn't require a person to declare a variable is a flawed programming language. But I'm pretty sure that if you were to ask Matz about whether not having to declare a variable before its use is highly required, he would say "yes".

There are ways to simulate breakpoints by filtering various kinds of events. (In Ruby, at least before 2.0, these are "line", "call", "return", ...) That is generally too slow. However, people can and have put up with the slowness, in the same way they can and have put up with declaring variables in some programming languages and the overhead that incurs from the programmer's standpoint.

There is an additional problem with the current "line", "call" and "return" event filtering, and with the way position information is recorded in MRI, that is also solved by instruction-based breakpoints. MRI doesn't report a position in any more detail than a line. But there can be several statements on a line. And in a functional-style program where one has f(x).g(y).h(z), one may want to stop before the g(y) or h(z) call.

You can sort of squint your eyes and come up with some way to imagine a debugger language where you say "stop at the first h() call on that line". Or, as a programmer using many of the existing debuggers, what you could do when you are stopped at that line is set a temporary breakpoint at the g() call, run to the return of g(), and step again.
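Spelled out in gdb-style debugger commands, the simulation looks something like this (illustrative syntax, not necessarily the exact trepanning commands):

    (trepan) break g      # breakpoint at the start of g()
    (trepan) continue     # run until g(y) is hit
    (trepan) finish       # run to the return of g()
    (trepan) step         # now stopped just before the h(z) call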

But this is convoluted, cumbersome and error prone, whether it is the person doing the debugging who has to do this or the person writing the debugger who has to simulate it to provide a stop before the h() call. With instruction-based breakpoints this is all eliminated. (There are other obstacles, but I think they are less severe.)

And there is even some precedent for this in gdb where in fact you can set a breakpoint at a hardware address.

Personally, I want to use and write debuggers that are as powerful as gdb and support both high and low-level debugging. And that's partly why I model the debuggers I write on gdb.

Again, I'm sorry this is long.

 

SASADA Koichi

Nov 30, 2012, 7:21:41 AM
to ruby-d...@googlegroups.com
(2012/11/30 10:47), Rocky Bernstein wrote:
> There is much in that thread that isn't germane to this discussion. For
> folks reading this group who haven't seen the thread, I'll summarize the
> relevant part cited above.

Thanks.

> ko1 replied (with my further comments added):
>
> Your patch is not acceptable because of performance issue.
>
>
> If you have benchmarks, I would really love to see that. It would also
> be a great thing to share with the community.
>
> And anyone reading this, please make your own benchmarks and share them
> here. Thanks.

I didn't. I am also happy to see them.

> For any formal sort of benchmarking I rely on Mark Moseley, who reported
> the impact was negligible (something in the range of 1-2%).
>
> My own experience in the 2 or so years of using this is that I've not
> noticed a degree of slowness with the entire collection that would make me
> want (let alone force me) to avoid using the patches. But my use
> might not be as heavy as others'.
>
> When I do find such slowness, there are still other things that one can
> do to drive down the cost, such as switching the interpreter code loop. A
> coarse version of this is just to have two interpreters around and at
> the outset decide which one to use.
>
> For those of you who do benchmarks, it would be useful to separate that
> breakpoint patch from the other patches, and to separate runs where
> breakpoints are never used (the majority of the time) versus those
> cases where a breakpoint is set.
>
> Let me explain a little. There is a test to see if any breakpoint
> exists in an instruction sequence. If not, less testing is done.
>
> The reason for separating this patch is that various patches incur space
> or time overhead when they are used. So although the question of how
> much overhead is incurred for all the patches is interesting, it is also
> helpful to get a feel for specific individual patches like this one.

I understand the implementation from the patch and your explanation.

> From 2.0, Module#prepend can make a breakpoint around a method call.
> Is it not enough?
>
>
> It is not enough.
>
>
> I asked people using debuggers on Ruby.
> They require:
>
> (1) single step exec (TracePoint will help)
> (2) break point for methods (Module#prepend/TracePoint will help)
> (3) inspection (rb_debug_inspector will help)
>
>
> I don't understand what "will help" means. It seems to imply it isn't
> complete. Is that right?

I have never tried to implement a debugger with these APIs.
Yes, I need to try.

> In Ruby 1.9.3 there already is a callback mechanism for single stepping
> and stopping on method calls and returns (1) and (2) above.
> Unless set_trace_func() is going away, then I don't see what "help"
> TracePoint provides over this.

"callback mechanism" on this sentence is "set_trace_func"?
TracePoint is faster than "set_trace_func".
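For example, a minimal TracePoint line hook (a sketch):

    trace = TracePoint.new(:line) do |tp|  # only :line events reach the block
      puts "step: #{tp.path}:#{tp.lineno}"
    end
    trace.enable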


> I want to postpone fine-grained breakpoint support to 2.1.
> Maybe we need to consider instruction swapping and so on.
>
> For the newbies reading this forum: I first brought this up 4-1/2 years
> ago <http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/21904>.
> I was told to wait until after 1.9.1. So almost a year later I brought
> it up again and was told that there was no road map and was asked to
> provide a patch
> <http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/21969>.
> And that formed the basis of that code.
>
> I've considered instruction swap code. I believe rubinius used it for a
> while but no longer does. And my experience with what rubinius uses
> instead now is not without its slight problems
> <https://github.com/rubinius/rubinius/issues/629#issuecomment-10263074>.
>
> That said, if people want to consider this, please by all means go
> ahead. The subtleties in the implementation and API right now make me
> want to shy away from this, especially given that what I have works well.

I'm sorry for my laziness.
I'm not a super person.


> Are breakpoints highly required?
>
>
> The short answer is yes.
>
> I think I understand why you ask that question, so although this is
> already long, I feel I want to give a more full and detailed answer.
> (Again, if you want the short answer, I've already given it.)
>
> Just as Matz has said he wanted to write a programming language to be
> fun to use or make programmers happy, I feel the same way about
> debuggers. Using programming language design as an analogy I think best
> conveys the "truth" about how I feel here.
>
> Suppose one were to ask, "Is not having to declare a variable before its
> use highly required?" Well, I suppose if one puts it this way there is a
> temptation to answer: of course not.
>
> There are lots of programming languages out there that require variables
> to be declared before use. Those languages are used quite a lot by a lot
> of people. Furthermore, there are some people who feel that any language
> that doesn't require a person to declare a variable is a flawed
> programming language. But I'm pretty sure that if you were to ask Matz
> about whether not having to declare a variable before its use is highly
> required, he would say "yes".

I agree with you.
"No one uses them" should not be a reason to say it is useless.
Thank you for your explanation.

Now, how do we know where trace should be inserted?
By line?

SASADA Koichi

Nov 30, 2012, 8:04:09 AM
to ruby-d...@googlegroups.com
(2012/11/30 21:21), SASADA Koichi wrote:
> Now, how do we know where trace should be inserted?
> By line?

One more question.

Your breakpoint hack invokes a new tracefunc event.
Is that enough?

Another option is registering a C function (or Proc) and invoking a
different one for each breakpoint.

Rocky Bernstein

Nov 30, 2012, 8:37:58 AM
to ruby-d...@googlegroups.com

> Is the "callback mechanism" in this sentence "set_trace_func"?

Yes. I just mean that there is already a way to register code that gets called back when "events" occur.

"set_trace_func" is the Ruby-specific function. In dynamic languages like Ruby, Python, Perl, and POSIX shells, a callback mechanism is generally how debuggers are implemented. So that's why I used that more generic term, since I am absent-minded and often can't remember function names.


> TracePoint is faster than "set_trace_func".

I think I saw some benchmarks on ruby-core that had some data. Although I accept that data, I think some of it is misguided or needs more explanation.

That benchmark shows how TracePoint is faster than "set_trace_func" when both are used. In reality, most of the time neither is used. So another benchmark would be to show the overhead when neither is used. And that might possibly be the more important thing to be concerned about.
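A sketch of what I mean, with `work` standing in for a real workload:

    require 'benchmark'

    def work; 500_000.times { |i| i.to_s }; end

    tp = TracePoint.new(:line) { }            # empty hook: tracing cost only
    Benchmark.bm(14) do |b|
      b.report("neither")        { work }
      b.report("set_trace_func") do
        set_trace_func(proc { |*args| })
        work
        set_trace_func(nil)
      end
      b.report("TracePoint")     { tp.enable; work; tp.disable }
    end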

When one is debugging a program, maximum speed is not generally a concern. Well, I write "generally" because, as in the case of simulating, say, a gdb "finish" command without underlying run-time support, it can be so slow that it is noticeable. So much so that in some situations I even avoided using that.

But I have never heard any complaint over slowdowns that "set_trace_func" causes when someone starts debugging.

It is a little like worrying about the overhead that is incurred when a program has a fatal exception. The quality and the handling of the fatal exception is much more of a concern than comparing how fast the fatal-exception handlers run.

Side note: some thought the original debug.rb that is distributed with Ruby was very slow, but that was not because set_trace_func() was slow; I believe it is mostly because binding() is slow, and that is used by debug.rb in its trace hook. A large part (probably more than half) of ruby-debug's speedup came from not calling binding() up front but creating it on demand.
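As I understand it, TracePoint in 2.0 gets this right by computing the binding only when the hook asks for it. A sketch:

    positions = []
    trace = TracePoint.new(:line) do |tp|
      positions << [tp.path, tp.lineno]  # cheap: no Binding object is created
      # calling tp.binding here is what would incur the binding() cost
    end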


 

> Now, how do we know where trace should be inserted?
> By line?

I've never liked the word "line" event, because it is too often confused with -- or worse means -- a line number in a file (or newline count in a string). "Statement" would be a better term, but alas my experience is that this is a little too coarse too.

I know it sounds flaky but bear with me, I'll be more precise later: "interesting well-defined stopping point" would be better. A rigorous definition of "interesting well-defined stopping point" would be a place an exception can be thrown. After all, when an exception is thrown, don't you want to know exactly where in a statement it was thrown?

Suppose my code is:
    f(a/b, c/d)

and a ZeroDivision error occurs. Wouldn't I want to know if this was in the a/b, or the c/d part?

So as far as where you would want to be able to specify where to stop, it might be before and after a/b or c/d is computed. You might also roughly say places in a Ruby statement before and after method calls and yields, as well as the starts of statements. This is slightly different than stopping every time a particular method is called. It is roughly like going to the return or yield from a method and stopping at the next instruction after that.
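To make "instruction offsets" concrete, here is roughly what YARV compiles the statement into (simplified; real offsets and operands will differ):

    # f(a/b, c/d), as simplified YARV bytecode:
    #   0000 putself
    #   0001 getlocal  a
    #   0003 getlocal  b
    #   0005 opt_div         # <- stopping here is "after a/b, before c/d"
    #   0006 getlocal  c
    #   0008 getlocal  d
    #   0010 opt_div         # <- here is "after c/d, before the f() call"
    #   0011 send      :f, 2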

I realize that in stepping inside a debugger this can get very tedious. That's why in ruby-debug and trepanning the language for stepping can be complicated. In both there is a mode "set different on"; in trepanning you can set event masks. But the simplest way to specify the exact place you want to stop at is just to give the instruction sequence offset, which is a little more cumbersome for a programmer. But it covers everything.

And while you mention "where trace" is inserted, that brings up something else that feels awkward. From my understanding of how many other interpreters work, it is also a little bit different in YARV. (Folks, those of you who know other interpreters, please correct me if I am wrong here.)

In YARV there is this in-line "trace" instruction that most of the time is ignored. It causes the code to bloat a little bit. Other interpreters often instead have a side table that associates the "well-defined interesting stopping points" with offsets into the instruction sequence.
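In Ruby terms, the side-table alternative is roughly this (a hypothetical structure, not YARV's actual layout):

    # Map instruction offsets to events, instead of in-line trace instructions:
    TRACE_TABLE = { 0 => [:line, 1], 11 => [:line, 2], 24 => [:call, 2] }

    def trace_event_at(offset)
      TRACE_TABLE[offset]  # nil for the common, untraced case
    end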




Rocky Bernstein

Nov 30, 2012, 8:48:01 AM
to ruby-d...@googlegroups.com
On Fri, Nov 30, 2012 at 8:04 AM, SASADA Koichi <k...@atdot.net> wrote:
> (2012/11/30 21:21), SASADA Koichi wrote:
>> Now, how do we know where trace should be inserted?
>> By line?
>
> One more question.
>
> Your breakpoint hack invokes a new tracefunc event.
> Is that enough?

I am not sure I understand the question but I'll try to answer.

I don't like set_trace_func with all of those parameters for file, line, binding, class and so on. Following something more akin to Python, all of those parameters got rolled into a "callstack" or "frame" object.

The frame object, an event ("call", "line", "raise", ...), and an optional argument to hold an exception object when the event is "raise" or the return object when the event is "return" are all that is needed.
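So the hook shape I have in mind is roughly this (illustrative; the accessor names are made up):

    def trace_hook(frame, event, arg = nil)
      # frame would respond to things like #file, #line, #binding and
      # #prev (the caller's frame); arg carries the raised exception for
      # "raise" and the return value for "return".
      puts "#{event} at #{frame.file}:#{frame.line}"
    end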





> Another option is registering a C function (or Proc) and invoking a
> different one for each breakpoint.

The way I've been doing things is to do the filtering in the hook. The hook can query the event type, where the program is, and other things. To the extent I can, I do this in Ruby and not in the run-time. See rb-trace for an example.
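A sketch of that hook-side filtering (rb-trace has the real version):

    breakpoints = [["foo.rb", 10], ["foo.rb", 42]]
    set_trace_func(proc do |event, file, line, id, binding, klass|
      next unless event == "line"
      next unless breakpoints.include?([file, line])
      puts "breakpoint hit at #{file}:#{line}"  # a debugger would stop here
    end)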

But YARV has always allowed multiple hooks to be registered, right? In my patches I just expose more of that, so you can remove a specific hook or find out whether a specific hook is in there.



 

Denis Ushakov

Nov 30, 2012, 9:23:08 AM
to ruby-d...@googlegroups.com
My 2 cents on benchmarks. I've tried to measure the overhead of implementing a debugger using pure-Ruby set_trace_func on Ruby 1.9,
so I've implemented some basic context-tracking stuff.
After that I tried to launch the tests of some Ruby application (unfortunately I don't remember which one). Here are the results:
    none   ruby-debug-base  debase
1   3.67   5.857             47.521
2   3.664  5.923             47.88
3   3.713  6.014             48.071
4   3.697  5.868
5   3.738  5.963
6   3.653  5.906
7   3.78   5.896
8   3.753  5.918
9   3.746  5.905
10  3.767  5.898

Times are in seconds.
"None" represents a normal run of the tests without any debugger at all.
There were no breakpoints, no catchpoints, etc. The debuggers were just capturing context info (such as file, line, binding).
I think that's a good way to measure overhead: just take some real app and check how much longer it takes to run the program when the debugger is only tracing its execution.
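For instance (a sketch, with `run_suite` standing in for the app's test entry point):

    require 'benchmark'

    def run_suite; 10_000.times { |i| Math.sqrt(i) }; end  # stand-in workload

    plain = Benchmark.realtime { run_suite }
    set_trace_func(proc { |event, file, line, *rest| })    # tracing-only hook
    traced = Benchmark.realtime { run_suite }
    set_trace_func(nil)
    puts "overhead: %.2fx" % (traced / plain)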

Rocky Bernstein

Nov 30, 2012, 12:01:55 PM
to ruby-d...@googlegroups.com
On Fri, Nov 30, 2012 at 9:23 AM, Denis Ushakov <dennis....@gmail.com> wrote:
> My 2 cents on benchmarks. I've tried to measure the overhead of implementing a debugger using pure-Ruby set_trace_func on Ruby 1.9,
> so I've implemented some basic context-tracking stuff.
> After that I tried to launch the tests of some Ruby application (unfortunately I don't remember which one). Here are the results:
>     none   ruby-debug-base  debase
> 1   3.67   5.857             47.521
> 2   3.664  5.923             47.88
> 3   3.713  6.014             48.071
> 4   3.697  5.868
> 5   3.738  5.963
> 6   3.653  5.906
> 7   3.78   5.896
> 8   3.753  5.918
> 9   3.746  5.905
> 10  3.767  5.898
>
> Times are in seconds.
> "None" represents a normal run of the tests without any debugger at all.
> There were no breakpoints, no catchpoints, etc. The debuggers were just capturing context info (such as file, line, binding).
> I think that's a good way to measure overhead: just take some real app and check how much longer it takes to run the program when the debugger is only tracing its execution.

What is the debase column?

The above timing tracks the case where the ruby-debug-base19 trace hook calls are run because some sort of set_trace_func() is called near the outset, versus when hooks are not run at all. Right?

In the case of the Ruby 1.9.3 frame patches, the situation is a little different for breakpoints. Here, whether or not a debugger gem is loaded, there is a little overhead per VM instruction. I am interested in what that overhead is. My gut sense is that it is not anywhere near the point where I wouldn't consider using it all the time.

I mention all of this again to point out how subtle things can be when looking at benchmarks. The importance of function X being 20% faster than function Y is diminished when neither function is used much, or is used only in debugging.

Rocky Bernstein

Nov 30, 2012, 12:25:39 PM
to ruby-d...@googlegroups.com
On Fri, Nov 30, 2012 at 8:48 AM, Rocky Bernstein <roc...@rubyforge.org> wrote:
> On Fri, Nov 30, 2012 at 8:04 AM, SASADA Koichi <k...@atdot.net> wrote:
>> (2012/11/30 21:21), SASADA Koichi wrote:
>>> Now, how do we know where trace should be inserted?
>>> By line?
>>
>> One more question.
>>
>> Your breakpoint hack invokes a new tracefunc event.
>> Is that enough?
>
> I am not sure I understand the question but I'll try to answer.

I thought of one other aspect, regarding breakpoints only invoking a central EXEC_TRACE_FUNC function, that you might have been wondering about.

One might want to associate data with a particular breakpoint. As with many other things, I think how rubinius handles this is useful to look at. There, at the Ruby level, when one sets a breakpoint one can pass an object that will be returned when that breakpoint is hit.

From a high-level standpoint, this is good because in that Breakpoint object one can store useful information like its "name", how many times the breakpoint was hit, conditions regarding when the breakpoint is to be active, and so on.

But again, since I like to do as much as possible in Ruby rather than in the C runtime, what happens in the trepanning debuggers is that each debugger has a breakpoint manager. That breakpoint manager has individual breakpoint objects that hold this information. The hook associated with that debugger (profiler, code coverage tool, or custom code) sees that a breakpoint occurred.

As I sort of indicated below, that hook gives the breakpoint manager the event, frame, and optional argument info, and asks the breakpoint manager whether this stopping point is one that it has been registered to be interested in.
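A condensed sketch of that arrangement (illustrative names; the real code is spread across the trepanning gems):

    class Breakpoint
      attr_reader :name
      attr_accessor :hits

      def initialize(name, condition = nil)
        @name, @condition, @hits = name, condition, 0
      end

      def active?(binding)
        @condition.nil? || binding.eval(@condition)
      end
    end

    class BreakpointManager
      def initialize; @bps = {}; end
      def add(file, line, bp); @bps[[file, line]] = bp; end

      # Called by the trace hook: returns the breakpoint if we should stop.
      def hit(file, line, binding)
        bp = @bps[[file, line]]
        return nil unless bp && bp.active?(binding)
        bp.hits += 1
        bp
      end
    end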

SASADA Koichi

Nov 30, 2012, 1:26:51 PM
to ruby-d...@googlegroups.com
(2012/11/30 22:37), Rocky Bernstein wrote:
> It is a little like worrying about the overhead that is incurred when a
> program has a fatal exception. The quality and the handling of the fatal
> exception is much more of a concern than comparing how fast the
> fatal-exception handlers run.

TracePoint#raised_exception also helps you, doesn't it?

> Now, how do we know where trace should be inserted?
> By line?
>
> I've never liked the word "line" event, because it is too often confused
> with -- or worse means -- a line number in a file (or newline count in a
> string). "Statement" would be a better term, but alas my experience is
> that this is a little too coarse too.

Ruby's "line" is physical concept.

> I know it sounds flaky but bear with me, I'll be more precise later:
> "interesting well-defined stopping point" would be better. A rigorous
> definition of "interesting well-defined stopping point" would be a place an
> exception can be thrown. After all, when an exception is thrown, don't
> you want to know exactly where in a statement it was thrown?
>
> Suppose my code is:
> f(a/b, c/d)
>
> and a ZeroDivision error occurs. Wouldn't I want to know if this was in
> the a/b, or the c/d part?
>
> So as far as where you would want to be able to specify where to stop, it
> might be before and after a/b or c/d is computed. You might also roughly
> say places in a Ruby statement before and after method calls and yields,
> as well as the starts of statements. This is slightly different than
> stopping every time a particular method is called. It is roughly like
> going to the return or yield from a method and stopping at the next
> instruction after that.

In that case, how do we know the exact place in the script?
The current ISeq doesn't have column numbers.

> I realize that in stepping inside a debugger this can get very
> tedious. That's why in ruby-debug and trepanning the language for
> stepping can be complicated. In both there is a mode "set different on";
> in trepanning you can set event masks. But the simplest way to specify
> the exact place you want to stop at is just to give the instruction
> sequence offset, which is a little more cumbersome for a programmer. But
> it covers everything.

The questions I wanted to ask were:
(a) How do we calculate the interesting points?
and
(b) What are the interesting points?

I understand that manipulating instruction offsets is most powerful for
ISeq users.

But I don't think it is a good way, because the VM instruction set is not
clearly defined.

This is why I propose https://bugs.ruby-lang.org/issues/6714
(without implementation).
I'll try it for next version.

> And while you mention "where trace" is inserted, that brings up
> something else that feels awkward. From my understanding of how many
> other interpreters work, it is also a little bit different in YARV.
> (Folks, those of you who know other interpreters, please correct me if I
> am wrong here.)
>
> In YARV there is this in-line "trace" instruction that most of the time
> is ignored. It causes the code to bloat a little bit. Other interpreters
> often instead have a side table that associates the "well-defined
> interesting stopping points" with offsets into the instruction sequence.

Yes, you are right.

I need to check which method is faster.

But it will have to be after the 2.0.0 release (feature-freeze deadline).
Sorry.

SASADA Koichi

Nov 30, 2012, 1:29:12 PM
to ruby-d...@googlegroups.com
(2012/12/01 3:26), SASADA Koichi wrote:
> The questions I wanted to ask were:
> (a) How do we calculate the interesting points?
> and
> (b) What are the interesting points?

(c) What interesting points do you provide via the debugger now?

(d) What interesting points do you want to provide?


Maybe working out these points is a similar process to decompilation.

SASADA Koichi

Nov 30, 2012, 1:30:28 PM
to ruby-d...@googlegroups.com
(2012/12/01 2:25), Rocky Bernstein wrote:
> I am not sure I understand the question but I'll try to answer.

Thank you.

Your answer is exactly what I wanted to know.

SASADA Koichi

Nov 30, 2012, 1:41:45 PM
to ruby-d...@googlegroups.com
(2012/12/01 3:26), SASADA Koichi wrote:
> But I don't think it is a good way, because the VM instruction set is not
> clearly defined.
>
> This is why I propose https://bugs.ruby-lang.org/issues/6714
> (without implementation).
> I'll try it for next version.

Thank you for the many comments.

Finally, I implemented the ISeq#line_trace_specify() method.
http://svn.ruby-lang.org/cgi-bin/viewvc.cgi?revision=38076&view=revision

This API provides only one interesting point: "line".
It is similar to the current "line" event.
However, using this API, you can make a "specified_line" hook for a
specific line.
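Roughly, usage looks like this (a sketch only; ISeq.of, line_trace_all, and the :specified_line event as used here are a reading of r38076, and the exact signatures may differ):

    def foo
      a = 1
      b = 2
      a + b
    end

    iseq = RubyVM::InstructionSequence.of(method(:foo))
    p iseq.line_trace_all             # e.g. [2, 3, 4]: lines with trace points
    iseq.line_trace_specify(0, true)  # promote the first to "specified_line"

    TracePoint.new(:specified_line) do |tp|
      puts "hit #{tp.path}:#{tp.lineno}"
    end.enable

    foo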

It is a limited breakpoint, as you say.
However, that is a limitation of the deadline, my skill, and so on.

Feedback is welcome.
(e.g., bad naming)

(Is such a halfway feature harmful?)

SASADA Koichi

Nov 30, 2012, 1:43:46 PM
to ruby-d...@googlegroups.com
(2012/12/01 3:41), SASADA Koichi wrote:
> It is a limited breakpoint, as you say.

But I feel this API is well abstracted. Users don't need to know about
bytecode.

Rocky Bernstein

Nov 30, 2012, 5:43:41 PM
to ruby-d...@googlegroups.com
On Fri, Nov 30, 2012 at 1:41 PM, SASADA Koichi <k...@atdot.net> wrote:
> (2012/12/01 3:26), SASADA Koichi wrote:
>> But I don't think it is a good way, because the VM instruction set is not
>> clearly defined.
>>
>> This is why I propose https://bugs.ruby-lang.org/issues/6714
>> (without implementation).
>> I'll try it for next version.
>
> Thank you for the many comments.
>
> Finally, I implemented the ISeq#line_trace_specify() method.
> http://svn.ruby-lang.org/cgi-bin/viewvc.cgi?revision=38076&view=revision

I don't understand why getting the list of line numbers isn't done from iseq->insn_info_table rather than, I gather, scanning for TRACE instructions in the entire instruction sequence. (That's what I used to build the hash iseq#offsetlines.) But this is a minor point.



> This API provides only one interesting point: "line".
> It is similar to the current "line" event.
> However, using this API, you can make a "specified_line" hook for a
> specific line.
>
> It is a limited breakpoint, as you say.
> However, that is a limitation of the deadline, my skill, and so on.
>
> Feedback is welcome.
> (e.g., bad naming)
>
> (Is such a halfway feature harmful?)

It is better than not having anything there, so in that sense it is not harmful.

But looking at the test code, it feels weird to me (and I am more concerned about the function than the name). If one has enough low-level things to build from, eventually one can get what one wants or needs.

I'm sorry I can't be more help here. I think what's needed is to take a step back, which I'll write about in another post.
 

Rocky Bernstein

Nov 30, 2012, 6:46:12 PM
to ruby-d...@googlegroups.com
On Fri, Nov 30, 2012 at 1:26 PM, SASADA Koichi <k...@atdot.net> wrote:
> (2012/11/30 22:37), Rocky Bernstein wrote:
>> It is a little like worrying about the overhead that is incurred when a
>> program has a fatal exception. The quality and the handling of the fatal
>> exception is much more of a concern than comparing how fast the
>> fatal-exception handlers run.
>
> TracePoint#raised_exception also helps you, doesn't it?

Yes. And ditto for #return_value.


> This is why I propose https://bugs.ruby-lang.org/issues/6714
> (without implementation).
> I'll try it for next version.

I just looked at that and have some trepidation, mostly based on past experience with such proposals and how they turn out. If it gets done, that would be fantastic!

My concern here is that it feels like there is a constant jumping from one new shiny object to another with respect to run-time introspection or debugging support, without having mastered what I feel are some of the not-so-shiny basics: accurate position reporting, being able to evaluate code in the context of a frame up the call stack, or setting breakpoints, to name a few.

There seems to be an optimism that some new experimental mechanism, or perhaps a change in technology/methodology (TDD/BDD, AOP, novel ways to use Ruby's runtime dynamism, dtrace hooks), will somehow solve or obviate the need to address what I feel are the existing deficiencies in being able to write a great debugger, or even just to report a location accurately, which I believe would be needed to write a perfect code-coverage tool. So some of the basics tend to get put off, because we think these new experimental mechanisms will address them.

Yes, I realize this sounds negative, so perhaps it was a mistake to write the above. It is definitely not my intention to discourage folks from working on such new, interesting, cool and experimental things. I hope it gets done. I hope it will be as fantastic as I imagine it to be. I hope it can obviate some of the existing challenges.

More positively, I am encouraged that ruby-debug's Kernel#binding_n, which was available since 1.8.5 or so but became impossible to do properly in 1.9, should be addressed in 2.0 (I think under some name other than Kernel#binding_n). I am encouraged that call-stack location information (such as has been available in Perl for 25 years) will finally be broken out into something more object-oriented than a string. And I am encouraged that there may be some sort of better debugger support, although it's inconceivable that it would be the kind of thing I think would be cool or cover all of what I'd like covered. (With respect to debugger support, here the problem is partly philosophical, or in approach.)
 