...
Hi Terry,
On 13/05/2021 5:32 am, Terry Reedy wrote:
> On 5/12/2021 1:40 PM, Mark Shannon wrote:
>
>> This is an informational PEP about a key part of our plan to improve
>> CPython performance for 3.11 and beyond.
>
>> As always, comments and suggestions are welcome.
>
> The claim that starts the Motivation section, "Python is widely
> acknowledged as slow.", has multiple problems. While some people
[...]
> different runtime.
I broadly agree, but CPython is largely synonymous with Python and
CPython is slower than it could be.
The phrase was not meant to upset anyone.
How would you rephrase it, bearing in mind that it needs to be short?
[...]
hopefully make it less of a concern.
Of course, compared to the environmental disaster that is Bitcoin, it's
not a big deal.
It is still important to speed up Python though.
If a program does 95% of its work in a C++ library and 5% in Python, it
can easily spend the majority of its time in Python because CPython is a
lot slower than C++ (in general).
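To make that concrete, here is a back-of-the-envelope calculation (my own
illustration; the unit counts and the 100x per-unit slowdown are assumptions
chosen only for the arithmetic, not measurements):

    # Hypothetical numbers, chosen only to illustrate the point above.
    total_units = 1_000_000              # total "units" of work in the program
    cpp_time = 0.95 * total_units * 1    # 95% of the work, at C++ speed (baseline)
    py_time = 0.05 * total_units * 100   # 5% of the work, assuming Python is ~100x slower

    print(py_time / (cpp_time + py_time))   # ~0.84: Python dominates the runtime

Under those assumed numbers, the 5% of the work done in Python accounts for
roughly 84% of the total runtime, which is why interpreter speedups still
matter for such programs.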
Hi everyone,
I would like to present PEP 659.
This is an informational PEP about a key part of our plan to improve
CPython performance for 3.11 and beyond.
For those of you aware of the recent releases of Cinder and Pyston,
PEP 659 might look similar.
It is similar, but I believe PEP 659 offers better interpreter
performance and is more suitable to a collaborative, open-source
development model.
As always, comments and suggestions are welcome.
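For readers who haven't read the PEP yet: the core idea is that individual
bytecode instructions observe the values they operate on and rewrite
themselves into specialized forms protected by cheap guards, falling back to
the generic form when a guard fails. The sketch below is purely my own toy
illustration of that pattern, with made-up opcode names; it is not the
PEP 659 design or CPython code:

    # Toy adaptive instruction: instr is a mutable [opcode, counter] pair.
    GENERIC = "BINARY_ADD_ADAPTIVE"
    SPECIALIZED_INT = "BINARY_ADD_INT"

    def binary_add(instr, a, b):
        if instr[0] == SPECIALIZED_INT:
            if type(a) is int and type(b) is int:   # guard: still adding ints?
                return a + b                        # fast path; a real interpreter would skip the generic dispatch here
            instr[0], instr[1] = GENERIC, 0         # guard failed: de-specialize
        # Generic path: do the work and record type feedback.
        if type(a) is int and type(b) is int:
            instr[1] += 1
            if instr[1] >= 2:                       # ints seen repeatedly:
                instr[0] = SPECIALIZED_INT          # rewrite the instruction
        return a + b

    instr = [GENERIC, 0]
    for args in [(1, 2), (3, 4), (5, 6), ("x", "y")]:
        print(binary_add(instr, *args), instr[0])

In the real interpreter the specialization happens on the bytecode of hot
code objects and covers operations such as attribute loads, global loads and
calls, but the specialize / guard / de-specialize cycle above is the general
shape of the idea.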
> 3. Another example: I'm working right now on a feature to step into a
> method. To do that right now my approach is:
> - Compute the function call names and bytecode offsets in a given
> frame.
> - When a frame is called (during a frame.f_trace call), check the
> parent frame bytecode offset (frame.f_lasti) to detect if the last thing
> was the expected call (and if it was, break the execution).
>
> This seems reasonable given the current implementation, where bytecodes
> are all fixed and there's a mapping from the frame.f_lasti ... Will that
> still work with the specializing adaptive interpreter?
If you are implementing this in Python, then everything should work as
it does now.
Out of interest, would inserting a breakpoint at offset 0 in the callee
function work?
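For reference, the pure-Python version of that check looks roughly like the
sketch below (my own illustration with made-up helper names, assuming
sys.settrace semantics and that f_lasti points at the call instruction when
the callee's frame is entered; the exact offset reported varies a little
across CPython versions):

    import dis
    import sys

    def call_offsets(code):
        # Offsets of the call instructions in a code object (hypothetical helper).
        return {ins.offset for ins in dis.get_instructions(code)
                if ins.opname.startswith("CALL")}

    def make_tracer(parent_code, expected_offsets):
        def tracer(frame, event, arg):
            if event == "call":
                parent = frame.f_back
                # Did the parent frame just execute one of the calls we expected?
                if (parent is not None
                        and parent.f_code is parent_code
                        and parent.f_lasti in expected_offsets):
                    print("stepped into", frame.f_code.co_name)
            return tracer
        return tracer

    def callee():
        return 42

    def caller():
        return callee()

    sys.settrace(make_tracer(caller.__code__, call_offsets(caller.__code__)))
    caller()
    sys.settrace(None)

The question above is essentially whether f_lasti and the dis-visible
offsets keep meaning the same thing once instructions start specializing
themselves; the answer above suggests they will for pure-Python tooling.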
>
> 4. Will it still be possible to change the frame.f_code prior to
> execution from a callback set in `PyThreadState.interp.eval_frame`
> (which will change the code to add a breakpoint to the bytecode and
> later call `_PyEval_EvalFrameDefault`)? Note: this is done in the
> debugger so that Python can run without any tracing until the breakpoint
> is hit (tracing is set afterwards to actually pause the execution as
> well as doing step operations).
Since frame.f_code is read-only in Python, I assume you mean in C.
I can make no guarantees about the layout or meaning of fields in the C
frame struct, I'm afraid.
But I'm sure we can get something to work for you.
Steve
(one of the other ones)
[...] potential 10% or 20% speedups in Python [...]
(2) bite the bullet and write C (or ctypes) that can do the calculations
100x as fast as a well-tuned Python program.
Hi Terry,
On 13/05/2021 8:20 am, Terry Reedy wrote:
> On 5/12/2021 1:40 PM, Mark Shannon wrote:
>
>> This is an informational PEP about a key part of our plan to improve
>> CPython performance for 3.11 and beyond.
>
> What is the purpose of this PEP? It seems in part to be like a
> Standards Track PEP in that it proposes a new (revised) implementation
> idea for the CPython bytecode interpreter. Do you intend this not to
> constitute approval of even the principle?
I will make it a standards PEP if anyone feels that would be better.
We can implement PEP 659 incrementally, without any large, abrupt changes
to the implementation and without any changes to the language or API/ABI,
so a standards PEP didn't seem necessary to us.
However, because it is a large change to the implementation, it seemed
worth documenting and doing so in a clearly public fashion. Hence the
informational PEP.
I personally think it should be a Standards Track PEP. This PEP isn't documenting some detail like PEP 13 or some release schedule, but is instead proposing a rather major change to the interpreter which a lot of us will need to understand in order to support the code (and I do realize the entire area of "what requires a PEP and what doesn't" is very hazy).
On Tue, May 25, 2021 at 12:34 PM Brett Cannon <br...@python.org> wrote:
> I personally think it should be a Standards Track PEP. This PEP isn't
> documenting some detail like PEP 13 or some release schedule, but is
> instead proposing a rather major change to the interpreter which a lot of
> us will need to understand in order to support the code (and I do realize
> the entire area of "what requires a PEP and what doesn't" is very hazy).

Does that also mean you think the design should be completely hashed out and approved by the SC ahead of merging the implementation? Given the amount of work, that would run into another issue -- many of the details of the design can't be fixed until the implementation has proceeded, and we'd end up with a long-living fork of the implementation followed by a giant merge. My preference (and my promise at the Language Summit) is to avoid mega-PRs and instead work on this incrementally.

Now, we've done similar things before (for example, the pattern matching implementation was a long-living branch), but the difference is that for pattern matching, the implementation followed the design, whereas for the changes to the bytecode interpreter that we're undertaking here, much of the architecture will be designed as the implementation proceeds, based on what we learn during the implementation.

Or do you think the "Standards Track" PEP should just codify general agreement that we're going to implement a specializing adaptive interpreter, with the level of detail that's currently in the PEP?
I don't recall other standards track PEPs that don't also spell out the specification of the proposal in detail.
On Tue., May 25, 2021, 12:58 Guido van Rossum, <gu...@python.org> wrote:
[...]
> Or do you think the "Standards Track" PEP should just codify general
> agreement that we're going to implement a specializing adaptive
> interpreter, with the level of detail that's currently in the PEP?

This. Having this as an informational PEP that's already marked as Active seems off somehow to me. I guess it feels more "we're doing this" (which I know isn't intended) rather than "this is our plan, what do you all think? All good?"

> I don't recall other standards track PEPs that don't also spell out the
> specification of the proposal in detail.

I also am not aware of a PEP that's proposed restructuring the eval loop like this either. 😉
I'm personally fine with the detail and saying details may shift as things move forward and lessons are learned based on the scope and updating the PEP accordingly. But that's just me and I don't know if others agree (hence the reason I'm suggesting this be Standards Track).
I want to document what we are doing as publicly as possible and a PEP
seems like a good way to do that.
I also want to reiterate that the PEP doesn't propose changing the
language, libraries, Python API or C API in any way. It is just
information about how we plan to speed up the interpreter.
>
>
> I don't recall other standards track PEPs that don't also spell out
> the specification of the proposal in detail.
>
>
> I also am not aware of a PEP that's proposed restructuring the eval loop
> like this either. 😉 I'm personally fine with the detail and saying
> details may shift as things move forward and lessons are learned based
> on the scope and updating the PEP accordingly. But that's just me and I
> don't know if others agree (hence the reason I'm suggesting this be
> Standards Track).
Suppose it were a standards PEP: what would that mean if it were rejected?
Rejection of a PEP is a choice in favor of an alternative, but what is
that alternative?
You can't simply say the "status quo" as that would implicitly prevent
any development at all on the bytecode interpreter.
The Instagram team behind the Cinder project may be interested in
reviewing Mark's work before it's merged into the Python development
branch.
A design PEP would be more convenient to review than a concrete
implementation.