
dual processor


John Brawley

Sep 4, 2005, 2:05:05 PM
Greetings, all.
I have a program I'm trying to speed up by putting it on a new machine.
The new machine is a Compaq W6000 2.0 GHz workstation with dual XEON
processors.
I've gained about 7x speed over my old machine, which was a 300 MHz AMD
K6II, but I think there ought to be an even greater speed gain due to the
two XEONs.
However, the thought occurs that Python (2.4.1) may not have the ability to
take advantage of the dual processors, so my question:
Does it?
If not, who knows where there might be info from people trying to make
Python run 64-bit, on multiple processors?
Thanks!

John Brawley


--
peace
JB
jgbr...@charter.net
http://tetrahedraverse.com
NOTE! Charter is not blocking viruses,
Therefore NO ATTACHMENTS, please;
They will not be downloaded from the Charter mail server.
__Prearrange__ any attachments, with me first.


Paul Rubin

Sep 4, 2005, 2:17:32 PM
"John Brawley" <jgbr...@earthlink.net> writes:
> However, the thought occurs that Python (2.4.1) may not have the ability to
> take advantage of the dual processors, so my question:
> Does it?

No.

> If not, who knows where there might be info from people trying to make
> Python run 64-bit, on multiple processors?
> Thanks!

Nobody is trying it in any serious way.

Jeremy Jones

Sep 4, 2005, 4:03:23 PM
to pytho...@python.org, John Brawley
John Brawley wrote:

>Greetings, all.
>I have a program I'm trying to speed up by putting it on a new machine.
>The new machine is a Compaq W6000 2.0 GHz workstation with dual XEON
>processors.
>I've gained about 7x speed over my old machine, which was a 300 MHz AMD
>K6II, but I think there ought to be an even greater speed gain due to the
>two XEONs.
>However, the thought occurs that Python (2.4.1) may not have the ability to
>take advantage of the dual processors, so my question:
>Does it?
>
>

Sure, but you have to write the program to do it. One Python process
will only saturate one CPU (at a time) because of the GIL (global
interpreter lock). If you can break up your problem into smaller
pieces, you can do something like start multiple processes to crunch the
data and use shared memory (which I haven't tinkered with...yet) to pass
data around between processes. Or an idea I've been tinkering with
lately is to use a BSD DB between processes as a queue just like
Queue.Queue in the standard library does between threads. Or you could
use Pyro between processes. Or CORBA.
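
A minimal sketch of that multi-process approach (illustrative only: the function names are made up, it assumes a Unix-like system since it relies on fork(), and it's written for the Python 2 of the era):

    import os, pickle

    def crunch(numbers):
        # Stand-in for the real CPU-bound work.
        return sum(n * n for n in numbers)

    def run_in_child(func, args):
        # Run func(args) in a forked child; return a file object
        # from which the pickled result can be read.
        r, w = os.pipe()
        if os.fork() == 0:              # child
            os.close(r)
            os.write(w, pickle.dumps(func(args)))
            os.close(w)
            os._exit(0)
        os.close(w)                     # parent
        return os.fdopen(r, 'rb')

    data = range(2000000)
    half = len(data) // 2
    readers = [run_in_child(crunch, data[:half]),
               run_in_child(crunch, data[half:])]
    total = sum(pickle.loads(rdr.read()) for rdr in readers)
    os.wait(); os.wait()                # reap both children
    print total

Each child burns its own CPU; the parent only pays for pickling the results back.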

>If not, who knows where there might be info from people trying to make
>Python run 64-bit, on multiple processors?
>Thanks!
>
>John Brawley
>
>
>--
>peace
>JB
>jgbr...@charter.net
>http://tetrahedraverse.com
>NOTE! Charter is not blocking viruses,
>Therefore NO ATTACHMENTS, please;
>They will not be downloaded from the Charter mail server.
>__Prearrange__ any attachments, with me first.
>
>
>
>

HTH,

JMJ

Paul Rubin

Sep 4, 2005, 4:09:16 PM
Jeremy Jones <zane...@bellsouth.net> writes:
> to pass data around between processes. Or an idea I've been tinkering
> with lately is to use a BSD DB between processes as a queue just like
> Queue.Queue in the standard library does between threads. Or you
> could use Pyro between processes. Or CORBA.

I think that doesn't count as using the multiple processors; it's
just multiple programs that could be on separate boxes.
Multiprocessing means shared memory.

This module might be of interest: http://poshmodule.sf.net
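
The shared-memory point can be sketched without POSH: on a Unix-like system, an anonymous mmap created before fork() is visible to both processes (a toy example, not POSH's API):

    import mmap, os, struct

    shared = mmap.mmap(-1, 8)       # anonymous map, shared across fork
    if os.fork() == 0:              # child: compute and store the result
        shared[0:8] = struct.pack('d', sum(x * x for x in xrange(1000)))
        os._exit(0)
    os.wait()                       # parent: the bytes are already here
    print struct.unpack('d', shared[0:8])[0]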

Jeremy Jones

Sep 4, 2005, 4:43:20 PM
to pytho...@python.org
Paul Rubin wrote:

>Jeremy Jones <zane...@bellsouth.net> writes:
>
>
>>to pass data around between processes. Or an idea I've been tinkering
>>with lately is to use a BSD DB between processes as a queue just like
>>Queue.Queue in the standard library does between threads. Or you
>>could use Pyro between processes. Or CORBA.
>>
>>
>
>I think that doesn't count as using the multiple processors; it's
>just multiple programs that could be on separate boxes.
>Multiprocessing means shared memory.
>
>

I disagree. My (very general) recommendation implies multiple
processes, very likely multiple instances (on the consumer side) of the
same "program". The OP wanted to know how to get Python to "take
advantage of the dual processors." My recommendation does that. Not in
the sense of a single process fully exercising multiple CPUs, but it's
an option, nonetheless. So, in that respect, your initial "no" was
correct. But,

>This module might be of interest: http://poshmodule.sf.net
>
>
>

Yeah - that came to mind. Never used it. I need to take a peek at
that. This module keeps popping up in discussions like this one.

JMJ

William Park

Sep 5, 2005, 12:07:53 AM
John Brawley <jgbr...@earthlink.net> wrote:
> Greetings, all. I have a program I'm trying to speed up by putting it
> on a new machine. The new machine is a Compaq W6000 2.0 GHz
> workstation with dual XEON processors. I've gained about 7x speed
> over my old machine, which was a 300 MHz AMD K6II, but I think there
> ought to be an even greater speed gain due to the two XEONs. However,
> the thought occurs that Python (2.4.1) may not have the ability to
> take advantage of the dual processors, so my question: Does it? If
> not, who knows where there might be info from people trying to make
> Python run 64-bit, on multiple processors? Thanks!

Break up your problem into 2 independent parts, and run 2 Python
processes. Your kernel should be an SMP kernel, though.

--
William Park <openge...@yahoo.ca>, Toronto, Canada
ThinFlash: Linux thin-client on USB key (flash) drive
http://home.eol.ca/~parkw/thinflash.html
BashDiff: Super Bash shell
http://freshmeat.net/projects/bashdiff/

Nick Craig-Wood

Sep 5, 2005, 6:29:48 AM
Jeremy Jones <zane...@bellsouth.net> wrote:
> One Python process will only saturate one CPU (at a time) because
> of the GIL (global interpreter lock).

I'm hoping python won't always be like this.

If you look at another well known open source program (the Linux
kernel) you'll see the progression I'm hoping for. At the moment
Python is at the Linux 2.0 level. It supports multiple processors,
but has a single lock (Python == Global Interpreter Lock, Linux == Big
Kernel Lock).

Linux then took the path of splitting the BKL into smaller and smaller
locks, increasing the scalability over multiple processors. By 2.6 we
have a fully preemptible kernel, lock-less read-copy-update, etc.

Splitting the GIL introduces performance and memory penalties. It's
been tried before in python (I can't find the link at the moment -
sorry!). Exactly the same complaint was heard when Linux started
splitting its BKL.

However, it's crystal clear now that the future is SMP. Modern chips
seem to have hit the GHz barrier, and now the easy meat for the
processor designers is to multiply silicon and make multiple thread /
core processors all in a single chip.

So, I believe Python has got to address the GIL, and soon.

A possible compromise (also used by Linux) would be to have two python
binaries. One with the GIL, which will be faster on uniprocessor
machines, and one with a system of fine-grained locking for
multiprocessor machines. This would be selected at compile time using
C macro magic.

--
Nick Craig-Wood <ni...@craig-wood.com> -- http://www.craig-wood.com/nick

Alan Kennedy

Sep 5, 2005, 7:01:39 AM
[Jeremy Jones]

>> One Python process will only saturate one CPU (at a time) because
>> of the GIL (global interpreter lock).

[Nick Craig-Wood]


> I'm hoping python won't always be like this.

Me too.

> However its crystal clear now the future is SMP.

Definitely.

> So, I believe Python has got to address the GIL, and soon.

I agree.

I note that PyPy currently also has a GIL, although it should hopefully
go away in the future.

"""
Armin and Richard started to change genc so that it can handle the
new external objects that Armin had to introduce to implement
threading in PyPy. For now we have a simple GIL but it is not
really deeply implanted in the interpreter so we should be able to
change that later. After two days of hacking they were finished.
Despite that it is still not possible to translate PyPy with
threading because we are missing dictionaries with int keys on the
RPython level.
"""

http://codespeak.net/pipermail/pypy-dev/2005q3/002287.html

The more I read about such global interpreter locks, the more I think
that the difficulty in getting rid of them lies in implementing portable
and reliable garbage collection.

Read this thread to see what Matz has to say about threading in Ruby.

http://groups.google.com/group/comp.lang.ruby/msg/dcf5ca374e6c5da8

One of these years I'm going to have to set aside a month or two to go
through and understand the cpython interpreter code, so that I have a
first-hand understanding of the issues.

--
alan kennedy
------------------------------------------------------
email alan: http://xhaus.com/contact/alan

Scott David Daniels

Sep 5, 2005, 8:36:38 AM
Nick Craig-Wood wrote:
> Splitting the GIL introduces performance and memory penalties....

> However its crystal clear now the future is SMP. Modern chips seem to
> have hit the GHz barrier, and now the easy meat for the processor
> designers is to multiply silicon and make multiple thread / core
> processors all in a single chip.
> So, I believe Python has got to address the GIL, and soon.
However, there is no reason to assume that those multiple cores must
work in the same process. One of the biggest issues in running python
in multiple simultaneously active threads is that the Python opcodes
themselves are no longer indivisible. Making a higher-level language
that allows updates work correctly with multiple threads involves lots
of coordination between threads, simply to know when data structures
are correct and when they are in transition.

Even processes sharing some memory (in a "raw binary memory" style) are
easier to write and test; with threads you'd lose too much processor
time to coordination effort that is usually unnecessary. The simplest
example I can think
of is decrementing a reference count. Only one thread can be allowed to
DECREF at any given time for fear of leaking memory, even though it will
most often turn out the objects being DECREF'ed by distinct threads are
themselves distinct.

In short, two Python threads running simultaneously cannot trust that
any basic Python data structures they access are in a consistent state
without some form of coordination.
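
That last point is easy to demonstrate even with the GIL in place; this toy (not from the original post) usually loses updates, because "counter += 1" compiles to separate load, add and store opcodes that threads can interleave:

    import threading

    counter = 0

    def bump(n):
        global counter
        for i in xrange(n):
            counter += 1        # load, add, store: not atomic

    threads = [threading.Thread(target=bump, args=(100000,))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print counter               # usually well short of 400000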

--Scott David Daniels
Scott....@Acm.Org

Nick Craig-Wood

Sep 5, 2005, 12:29:48 PM
Scott David Daniels <Scott....@Acm.Org> wrote:
> Nick Craig-Wood wrote:
> > Splitting the GIL introduces performance and memory penalties....
> > However its crystal clear now the future is SMP. Modern chips seem to
> > have hit the GHz barrier, and now the easy meat for the processor
> > designers is to multiply silicon and make multiple thread / core
> > processors all in a single chip.
> > So, I believe Python has got to address the GIL, and soon.
> However, there is no reason to assume that those multiple cores must
> work in the same process.

No, of course not. However, if they aren't, then you've got the horrors
of IPC to deal with, which is difficult to do fast and portably. Much
easier to communicate with another thread, especially with the lovely
python threading primitives.

> One of the biggest issues in running python in multiple
> simultaneously active threads is that the Python opcodes themselves
> are no longer indivisible. Making a higher level language that
> allows updates work with multiple threads involves lots of
> coordination between threads simply to know when data structures
> are correct and when they are in transition.

Sure! No one said it was easy. However, I think it can be done for all
of python's native data types, and in a way that is completely
transparent to the user.

> Even processes sharing some memory (in a "raw binary memory" style) are
> easier to write and test. You'd lose too much processor to coordination
> effort which was likely unnecessary. The simplest example I can think
> of is decrementing a reference count. Only one thread can be allowed to
> DECREF at any given time for fear of leaking memory, even though it will
> most often turn out the objects being DECREF'ed by distinct threads are
> themselves distinct.

Yes, locking is expensive. If we placed a lock in every python object,
that would bloat memory usage and CPU time grabbing and releasing all
those locks. However, if it meant your threaded program could use 90%
of all 16 CPUs rather than 100% of one, I think it's obvious where the
payoff lies.

Memory is cheap. Multiple cores (SMP/SMT) are everywhere!

> In short, two Python threads running simultaneously cannot trust
> that any basic Python data structures they access are in a
> consistent state without some form of coordination.

Aye, lots of locking is needed.

Paul Rubin

Sep 5, 2005, 12:48:01 PM
Nick Craig-Wood <ni...@craig-wood.com> writes:
> > of is decrementing a reference count. Only one thread can be allowed to
> > DECREF at any given time for fear of leaking memory, even though it will
> > most often turn out the objects being DECREF'ed by distinct threads are
> > themselves distinct.
>
> Yes, locking is expensive. If we placed a lock in every python object,
> that would bloat memory usage and CPU time grabbing and releasing all
> those locks. However, if it meant your threaded program could use 90%
> of all 16 CPUs rather than 100% of one, I think it's obvious where the
> payoff lies.

Along with fixing the GIL, I think PyPy needs to give up on this
BASIC-style reference counting and introduce real garbage collection.
Lots of work has been done on concurrent GC and the techniques for it
are reasonably understood by now, especially if there's no hard
real-time requirement.

Terry Reedy

Sep 5, 2005, 2:40:59 PM
to pytho...@python.org

"Paul Rubin" <"http://phr.cx"@NOSPAM.invalid> wrote in message
news:7xu0gz3...@ruckus.brouhaha.com...

> Along with fixing the GIL, I think PyPy needs to give up on this
> BASIC-style reference counting and introduce real garbage collection.
> Lots of work has been done on concurrent GC and the techniques for it
> are reasonably understood by now, especially if there's no hard
> real-time requirement.

I believe that the gc method (ref count versus other) either is now or will be
a PyPy compile option. Flexibility in certain implementation details
leading to flexibility in host systems is, I also recall, part of their EC
funding rationale. But check their announcements, etc., for verification
and details.

Terry J. Reedy

Steve Jorgensen

Sep 5, 2005, 3:26:28 PM
On 05 Sep 2005 10:29:48 GMT, Nick Craig-Wood <ni...@craig-wood.com> wrote:

>Jeremy Jones <zane...@bellsouth.net> wrote:
>> One Python process will only saturate one CPU (at a time) because
>> of the GIL (global interpreter lock).
>
>I'm hoping python won't always be like this.

I don't get that. Python was never designed to be a high performance
language, so why add complexity to its implementation by giving it
high-performance capabilities like SMP? You can probably get a bigger speed
improvement for most tasks by writing them in C than by running them on 2
processors in an interpreted language.

Instead of trying to make Python into a high-performance language, why not
try to factor out the smallest possible subset of the program that really
needs the performance boost, write that as a library in C, then put all the
high-level control logic, UI, etc. in Python? The C code can then use threads
and forks if need be to benefit from SMP.

Michael Sparks

Sep 5, 2005, 4:43:07 PM
Steve Jorgensen wrote:

> On 05 Sep 2005 10:29:48 GMT, Nick Craig-Wood <ni...@craig-wood.com> wrote:
>
>>Jeremy Jones <zane...@bellsouth.net> wrote:
>>> One Python process will only saturate one CPU (at a time) because
>>> of the GIL (global interpreter lock).
>>
>>I'm hoping python won't always be like this.
>

> I don't get that. Python was never designed to be a high performance


> language, so why add complexity to its implementation by giving it
> high-performance capabilities like SMP?

It depends on personal perspective. If in a few years time we all have
machines with multiple cores (eg the CELL with effective 9 CPUs on a chip,
albeit 8 more specialised ones), would you prefer that your code *could*
utilise your hardware sensibly rather than not?

Or put another way - would you prefer to write your code mainly in a
language like python, or mainly in a language like C or Java? If python,
it's worth worrying about!

If it was python (or similar) you might "only" have to worry about
concurrency issues. If it's a language like C you might have to worry
about memory management, typing AND concurrency (oh my!).
(Let alone C++'s TMP :-)

Regards,


Michael

Steve Jorgensen

Sep 5, 2005, 6:10:09 PM

That argument makes some sense, but I'm still not sure I agree. Rather than
make Python programmers have to deal with concurrency issues in every app to
get it to make good use of the hardware it's on, why not have many of the
common libraries that Python uses to do processing take advantage of SMP when
you use them? A database server is a good example of a way we can already do
some of that today. Also, what if things like hash table updates were made
lazy (if they aren't already) and could be processed as background operations
to have the table more likely to be ready when the next hash lookup occurs?

Carl Friedrich Bolz

Sep 5, 2005, 7:45:43 PM
to Terry Reedy, pytho...@python.org

Terry Reedy wrote:
> "Paul Rubin" <"http://phr.cx"@NOSPAM.invalid> wrote in message
> news:7xu0gz3...@ruckus.brouhaha.com...
>>Along with fixing the GIL, I think PyPy needs to give up on this
>>BASIC-style reference counting and introduce real garbage collection.
>>Lots of work has been done on concurrent GC and the techniques for it
>>are reasonably understood by now, especially if there's no hard
>>real-time requirement.
>
> I believe that gc method (ref count versus other) either is now or will be
> a PyPy compile option. Flexibility in certain implementation details
> leading to flexibility in host systems is, I also recall, part of their EC
> funding rationale. But check their announcements, etc., for verification
> and details.

At the moment it is possible to choose between a refcounting GC and the
Boehm-Demers-Weiser garbage collector (a conservative mark&sweep GC) as
an option when you translate PyPy to C (when translating to LLVM only
the Boehm collector is supported). We plan to add more sophisticated
(and exact) GCs during the next phase of the project. Some amount of
work on this was done during my Summer of Code project (see
http://codespeak.net/pypy/dist/pypy/doc/garbage_collection.html)
although the results are not yet completely integrated into the
translation process.

In addition we plan to add threading with some sort of more fine-grained
locking as a compile time option although it is not really clear yet how
that will work in detail :-). Right now you can translate with a GIL or
with no thread-support at all.

Carl Friedrich Bolz

Grant Edwards

Sep 5, 2005, 10:51:15 PM
On 2005-09-05, Nick Craig-Wood <ni...@craig-wood.com> wrote:
> Jeremy Jones <zane...@bellsouth.net> wrote:
>> One Python process will only saturate one CPU (at a time) because
>> of the GIL (global interpreter lock).
>
> I'm hoping python won't always be like this.

Quite a few people are. :)

> So, I believe Python has got to address the GIL, and soon.

It would be nice if greater concurrency was possible, but it's
open source: Python hasn't "got" to do anything.

--
Grant Edwards <grante at visi.com> Yow! I know how to do SPECIAL EFFECTS!!

Jeremy Jones

Sep 5, 2005, 11:27:16 PM
to pytho...@python.org, m...@cerenity.org
Michael Sparks wrote:

>Steve Jorgensen wrote:
>
>
>
>>On 05 Sep 2005 10:29:48 GMT, Nick Craig-Wood <ni...@craig-wood.com> wrote:
>>
>>
>>
>>>Jeremy Jones <zane...@bellsouth.net> wrote:
>>>
>>>
>>>> One Python process will only saturate one CPU (at a time) because
>>>> of the GIL (global interpreter lock).
>>>>
>>>>
>>>I'm hoping python won't always be like this.
>>>
>>>
>>I don't get that. Python was never designed to be a high performance
>>language, so why add complexity to its implementation by giving it
>>high-performance capabilities like SMP?
>>
>>
>
>It depends on personal perspective.
>

Ummmm....not totally. It depends on what you're doing. If what you're
doing is not going to be helped by scaling across a bunch of CPUs, then
you're just as well off if Python still has the GIL. Sort of. Steve
brings up an interesting argument of making the language do some of your
thinking for you. Maybe I'll address that momentarily....

I'm not saying I wish the GIL would stay around. I wish it would go.
As the price of computers goes down, the number of CPUs per computer
goes up, and the price per CPU in a single system goes down, the ability
to utilize a bunch of CPUs is going to become more important. And maybe
Steve's magical thinking programming language will have a ton of merit.

>If in a few years time we all have
>machines with multiple cores (eg the CELL with effective 9 CPUs on a chip,
>albeit 8 more specialised ones), would you prefer that your code *could*
>utilise your hardware sensibly rather than not.
>
>

I'm not picking on you. Trust me. But let me play devil's advocate for
a sec. Let's say we *could* fully utilize a multi-CPU machine today with
Python. What has that bought us (as the amorphous "we" of the Python
community)? I would almost bet money that the majority of code would
not be helped by that at all. I'd almost bet that the vast majority of
Python code out there runs single-threaded and would see no performance
boost whatsoever. Who knows, maybe that's why we still have the GIL.

>Or put another way - would you prefer to write your code mainly in a
>language like python, or mainly in a language like C or Java?
>

There are benefits to writing code in C and Java apart from
concurrency. Most of them are masochistic, but there are benefits
nonetheless. For my programming buck, Python wins hands down.

But I agree with you. Python really should start addressing solutions
for concurrent tasks that will benefit from simultaneously utilizing
multiple CPUs.

>If python,
>it's worth worrying about!
>
>If it was python (or similar) you might "only" have to worry about
>concurrency issues. If it's a language like C you might have to worry
>about memory management, typing AND concurrency (oh my!).
>(Let alone C++'s TMP :-)
>
>Regards,
>
>
>Michael
>
>

JMJ

Jeremy Jones

Sep 5, 2005, 11:42:38 PM
to pytho...@python.org
Steve Jorgensen wrote:

Now, *this* is a really interesting line of thought. I've got a feeling
that it'd be pretty tough to implement something like this in a
language, though. An application like an RDBMS is one thing, an
application framework another, and a programming language is yet a
different species altogether. It'd have to be insanely intelligent
code, though. If you had bunches of Python processes, would they all
start digging into each list or generator or hash to try to predict what
the code is going to potentially need next? Is this predictive behavior
going to chew up more CPU time than it should? What about memory?
You've got to store the predictive results somewhere. Sounds great.
Has some awesomely beneficial implications. Sounds hard as anything to
implement well, though.

JMJ

Steve Jorgensen

Sep 6, 2005, 1:14:12 AM
On Mon, 05 Sep 2005 23:42:38 -0400, Jeremy Jones <zane...@bellsouth.net>
wrote:

>Steve Jorgensen wrote:
>
...


>>That argument makes some sense, but I'm still not sure I agree. Rather than
>>make Python programmers have to deal with concurrency issues in every app to
>>get it to make good use of the hardware it's on, why not have many of the
>>common libraries that Python uses to do processing take advantage of SMP when
>>you use them? A database server is a good example of a way we can already do
>>some of that today. Also, what if things like hash table updates were made
>>lazy (if they aren't already) and could be processed as background operations
>>to have the table more likely to be ready when the next hash lookup occurs?
>>
>>
>Now, *this* is a really interesting line of thought. I've got a feeling
>that it'd be pretty tough to implement something like this in a
>language, though. An application like an RDBMS is one thing, an
>application framework another, and a programming language is yet a
>different species altogether. It'd have to be insanely intelligent
>code, though. If you had bunches of Python processes, would they all
>start digging into each list or generator or hash to try to predict what
>the code is going to potentially need next? Is this predictive behavior
>going to chew up more CPU time than it should? What about memory?
>You've got to store the predictive results somewhere. Sounds great.
>Has some awesomely beneficial implications. Sounds hard as anything to
>implement well, though.

I think you're making the concept harder than it needs to be. From what I'm
told by folks who know SMP way better than me, it's usually best to just look
for stuff that's waiting to be done and do it without regard to whether the
result will ever be used. That tends to be more efficient than trying to
figure out the most useful tasks to do.

In this case, it would just be keeping a list of dirty hash tables, and
having a process that pulls the next one from the queue and cleans it.
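
A rough sketch of that shape, using the thread-level Queue.Queue as a stand-in (the table objects and the rebuild step are hypothetical):

    import threading, Queue

    dirty = Queue.Queue()           # tables waiting to be cleaned

    def rebuild(table):
        pass                        # stand-in for the re-hashing work

    def cleaner():
        # Background worker: pull the next dirty table off the queue
        # and bring it up to date before anyone needs it.
        while True:
            table = dirty.get()
            if table is None:       # sentinel: shut down
                return
            rebuild(table)

    worker = threading.Thread(target=cleaner)
    worker.start()
    dirty.put({'spam': 1})          # mark a table as dirty
    dirty.put(None)
    worker.join()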

Paul Rubin

Sep 6, 2005, 2:31:13 AM
Steve Jorgensen <nos...@nospam.nospam> writes:
> In this case, it would just be keeping a list of dirty hash tables, and
> having a process that pulls the next one from the queue, and cleans it.

If typical Python programs spend enough time updating hash tables
for a hack like this to be of any benefit, Python itself is seriously
mis-designed and needs to be fixed.

Steve Jorgensen

Sep 6, 2005, 2:57:39 AM
On 05 Sep 2005 23:31:13 -0700, Paul Rubin <http://phr...@NOSPAM.invalid>
wrote:

I dunno - you might be right, and you might be wrong. I was just pointing out
that there may be standard operations that can be made lazy, with background
tasks completing them before they are needed by the Python code.

Given that Python is highly dependent upon dictionaries, I would think a lot
of the processor time used by a Python app is spent in updating hash tables.
That guess could be right or wrong, but assuming it's right, is that a design
flaw? That's just a language spending most of its time handling the
constructs it is based on. What else would it do?

Paul Rubin

Sep 6, 2005, 3:20:23 AM
Steve Jorgensen <nos...@nospam.nospam> writes:
> Given that Python is highly dependent upon dictionaries, I would
> think a lot of the processor time used by a Python app is spent in
> updating hash tables. That guess could be right or wrong, but
> assuming it's right, is that a design flaw? That's just a language
> spending most of its time handling the constructs it is based on.
> What else would it do?

I don't believe it's right based on half-remembered profiling discussions
I've seen here. I haven't profiled CPython myself. However, if tuning
the rest of the implementation makes hash tables a big cost, then the
implementation, and possibly the language, should be updated to not
have to update hashes so much. For example,

x.y = 3

currently causes a hash update in x's internal dictionary. But either
some static type inference or a specializing compiler like psyco could
optimize the hash lookup away, and just update a fixed slot in a table.
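
For a rough picture of the fixed-slot idea, CPython's existing __slots__ already makes this trade by hand, replacing the per-instance dictionary with fixed storage (roughly what such a compiler might derive automatically):

    class Point(object):
        # No per-instance __dict__: x and y live in fixed slots, so
        # "p.x = 3" fills a slot instead of updating a hash table.
        __slots__ = ('x', 'y')

    p = Point()
    p.x = 3
    p.y = 4
    print p.x, p.y      # -> 3 4
    # p.z = 5 would raise AttributeError: there is no such slot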

Michael Sparks

Sep 6, 2005, 3:57:14 AM
Jeremy Jones wrote:

> Michael Sparks wrote:
>>Steve Jorgensen wrote:
>>>On 05 Sep 2005 10:29:48 GMT, Nick Craig-Wood <ni...@craig-wood.com> wrote:
>>>
>>>
>>>
>>>>Jeremy Jones <zane...@bellsouth.net> wrote:
>>>>> One Python process will only saturate one CPU (at a time) because
>>>>> of the GIL (global interpreter lock).
>>>>I'm hoping python won't always be like this.

>>>I don't get that. Python was never designed to be a high performance
>>>language, so why add complexity to its implementation by giving it
>>>high-performance capabilities like SMP?

>>It depends on personal perspective.
>>
> Ummmm....not totally. It depends on what you're doing.

Yes, it does. Hence why I said personal perspective.

> Sort of. Steve
> brings up an interesting argument of making the language do some of your
> thinking for you. Maybe I'll address that momentarily....

Personally I think that the language and tools will have to help. I'm
working on the latter, I hope the GIL goes away to help with the former,
but does so in an intelligent manner. Why am I working on the latter?

I work with naturally concurrent systems all the time, and they're
concurrent not for performance reasons, but simply because that's what they
are. As a result I want concurrency easy to deal with, with efficiency as
a secondary concern. However, in that scenario, having multiple CPUs not
being utilised sensibly *is* a concern to me.

> I'm not saying I wish the GIL would stay around. I wish it would go.
> As the price of computers goes down, the number of CPUs per computer
> goes up, and the price per CPU in a single system goes down, the ability
> to utilize a bunch of CPUs is going to become more important.

> And maybe
> Steve's magical thinking programming language will have a ton of merit.

I see no reason to use such derisory tones, though I'm sure you didn't mean
it that way. (I can see you mean it as extreme skepticism though :-)

>>If in a few years time we all have
>>machines with multiple cores (eg the CELL with effective 9 CPUs on a chip,
>>albeit 8 more specialised ones), would you prefer that your code *could*
>>utilise your hardware sensibly rather than not.
>>
>>

> But let me play devil's advocate for


> a sec. Let's say we *could* fully utilize a multi CPU today with
> Python.

...


> I would almost bet money that the majority of code would
> not be helped by that at all.

Are you so sure? I suspect this is due to you being used to writing code
that is designed for a single-CPU system. What if your basic model of
system creation changed to include system composition as well as
function calls? Then each part of the system you compose can potentially
run on a different CPU. Take the following for example:

(sorry for the length, I prefer real examples :-)

Graphline(
    EXIT = ExceptionRaiser("FORCED SYSTEM QUIT"),
    MOUSE = Multiclick(caption="",
                       position=(0,0),
                       transparent=True,
                       msgs = [ "", "NEXT", "FIRST", "PREV", "PREV","NEXT" ],
                       size=(1024,768)),
    KEYS = KeyEvent(outboxes = { "slidecontrol" : "Normal place for message",
                                 "shutdown" : "Place to send some shutdown messages",
                                 "trace" : "Place for trace messages to go",
                               },
                    key_events = {112: ("PREV", "slidecontrol"),
                                  110: ("NEXT","slidecontrol"),
                                  113: ("QUIT", "shutdown"),
                                 }),
    SPLITTER = Splitter(outboxes = {"totimer" : "For sending copies of key events to the timer",
                                    "tochooser" : "This is the primary location for key events",
                                   }),
    TIMER = TimeRepeatMessage("NEXT",3),
    FILES = Chooser(items = files, loop=True),
    DISPLAY = Image(size=(1024,768),
                    position=(0,0),
                    maxpect=(1024,768) ),
    linkages = {
        ("TIMER", "outbox") : ("FILES", "inbox"),

        ("MOUSE", "outbox") : ("SPLITTER", "inbox"),
        ("KEYS", "slidecontrol") : ("SPLITTER", "inbox"),
        ("SPLITTER", "tochooser") : ("FILES", "inbox"),
        ("SPLITTER", "totimer") : ("TIMER", "reset"),

        ("KEYS", "shutdown") : ("EXIT", "inbox"),
        ("FILES", "outbox") : ("DISPLAY", "inbox"),
    }
).run()

What does that do? It's a slideshow program for displaying pictures. There's a
small amount of setup before this (identifying files for display, imports, etc),
but that's by far the bulk of the system.

That's pure python code (aside from pygame), and the majority of the code
is written single threaded, with very little concern for concurrency. However
the code above will naturally sit on a 7-CPU system and use all 7 CPUs (when
we're done). Currently however we use generators to limit the overhead in
a single-CPU system, though if the GIL were eliminated sensibly, using threads
would allow the same code above to run on a multi-CPU system efficiently.

It probably looks strange, but it's really just a logical extension of the
Unix command line's pipelines to allow multiple pipelines. Similarly, from
a unix command line perspective, the following will automatically take
advantage of all the CPUs I have available:

(find |while read i; do md5sum $i; done|cut -b-32) 2>/dev/null |sort

And a) most unix sysadmins I know find that easy (and probably find the above
laughable), b) given a multiprocessor system, it will probably try to maximise
pipelining, and c) I see no reason why sysadmins should be the only people
writing programs who use concurrency without thinking about it :-)

I *do* agree it takes a little getting used to, but then I'm sure the same
was true for many people who learnt OOP after learning to program. Unlike
people who learnt OO at the time they started learning programming and just
see it as a natural part of a language.

> There are benefits to writing code in C and Java apart from
> concurrency. Most of them are masochistic, but there are benefits
> nonetheless. For my programming buck, Python wins hands down.
>
> But I agree with you. Python really should start addressing solutions
> for concurrent tasks that will benefit from simultaneously utilizing
> multiple CPUs.

That's my point too. I don't think our opinions really diverge that far :)

Best Regards,


Michael.

Jeremy Jones

Sep 6, 2005, 8:40:50 AM
to pytho...@python.org, m...@cerenity.org
Michael Sparks wrote:

>Jeremy Jones wrote:
>
>
<snip>

>
>
>>And maybe
>>Steve's magical thinking programming language will have a ton of merit.
>>
>>
>
>I see no reason to use such derisory tones, though I'm sure you didn't mean
>it that way. (I can see you mean it as extreme skepticism though :-)
>
>

None of the above, really. I thought it was a really great idea and
worthy of pursuit. In my response back to Steve, the most skeptical
thing I said was that I think it would be insanely difficult to
implement. Maybe it wouldn't be as hard as I think. And according to a
follow-up by Steve, it probably wouldn't.

<snip>

>>I would almost bet money that the majority of code would
>>not be helped by that at all.
>>
>>
>
>Are you so sure? I suspect this is due to you being used to writing code
>that is designed for a single CPU system.
>

Not really. I've got a couple of projects in the works that would benefit
tremendously from the GIL being lifted. And one of them is actually
evolving into a funny little hack that will allow easy persistent
message passing between processes (on the same system) without having to
mess around with networking. I'm betting this is the case just because
of reading this list, the tutor list, and interaction with other Python
programmers.

<snip>

>That's my point too. I don't think our opinions really diverge that far :)
>
>

We don't. Again (as we have both stated), as systems find themselves
with more and more CPUs onboard, it becomes more and more absurd to have
to do little hacks like the one I allude to above. If Python wants to
maintain its position in the pantheon of programming languages, it
really needs to 1) find a good clean way to utilize multi-CPU machines
and 2) come up with a simple, consistent, Pythonic concurrency paradigm.

>Best Regards,
>
>
>Michael.
>
>
>
Good discussion.


JMJ

Jorgen Grahn

Sep 6, 2005, 1:22:19 PM
On Mon, 05 Sep 2005 21:43:07 +0100, Michael Sparks <m...@cerenity.org> wrote:
> Steve Jorgensen wrote:
...
>> I don't get that. Python was never designed to be a high performance
>> language, so why add complexity to its implementation by giving it
>> high-performance capabilities like SMP?
>
> It depends on personal perspective. If in a few years time we all have
> machines with multiple cores (eg the CELL with effective 9 CPUs on a chip,
> albeit 8 more specialised ones), would you prefer that your code *could*
> utilise your hardware sensibly rather than not.
>
> Or put another way - would you prefer to write your code mainly in a
> language like python, or mainly in a language like C or Java? If python,
> it's worth worrying about!

Mainly in Python, of course. But it still feels like a pretty perverted idea
to fill an SMP system with something as inefficient as interpreting Python
code!

(By the way, I don't understand why a computer should run one program at
a time all the time. Take a time-sharing system where lots of people are
logged in and do their work. Add a CPU there and you'll have an immediate
performance gain, even if no one is running programs that are optimized
for it!)

I feel the recent SMP hype (in general, and in Python) is a red herring. Why
do I need that extra performance? What application would use it? Am I
prepared to pay the price (in bugs, lack of features, money, etc) for
someone to implement this? There's already a lot of performance lost in
bloatware people use every day; why are we not paying the much lower price
for having that fixed with traditional code optimization?

I am sure some applications that ordinary people use could benefit from SMP
(like image processing). But most tasks don't, and most of those that do can
be handled on the process level. For example, if I'm compiling a big C
project, I can say 'make -j3' and get three concurrent compilations. The
Unix shell pipeline is another example.

/Jorgen

--
// Jorgen Grahn <jgrahn@ Ph'nglui mglw'nafh Cthulhu
\X/ algonet.se> R'lyeh wgah'nagl fhtagn!

Jorgen Grahn

Sep 6, 2005, 1:45:53 PM
On Tue, 06 Sep 2005 08:57:14 +0100, Michael Sparks <m...@cerenity.org> wrote:
...

> Are you so sure? I suspect this is due to you being used to writing code
> that is designed for a single-CPU system. What if your basic model of
> system creation changed to include system composition as well as
> function calls? Then each part of the system you compose can potentially
> run on a different CPU. Take the following for example:
...

> It probably looks strange, but it's really just a logical extension of the
> Unix command line's pipelines to allow multiple pipelines. Similarly, from
> a unix command line perspective, the following will automatically take
> advantage of all the CPU's I have available:
>
> (find |while read i; do md5sum $i; done|cut -b-32) 2>/dev/null |sort
>
> And a) most unix sys admins I know find that easy (probably the above
> laughable) b) given a multiprocessor system will probably try to maximise
> pipelining c) I see no reason why sys admins should be the only people
> writing programs who use concurrency without thinking about it :-)

Nitpick: not all Unix users are sysadmins ;-) Some Unix sysadmins actually
have real users, and the clued users use the same tools. I used the 'make
-j3' example elsewhere in the thread (I hadn't read this posting when I
responded there).

It seems to me that there must be a flaw in your arguments, but I can't seem
to find it ;-)

Maybe it's hard in real life to find two independent tasks A and B that can
be parallelized with just a unidirectional pipe between them? Because as
soon as you have to do the whole threading/locking/communication circus, it
gets tricky and the bugs (and performance problems) show up fast.

But it's interesting that the Unix pipeline Just Works (TM) with so little
effort.

Thomas Bellman

Sep 6, 2005, 3:35:49 PM
Michael Sparks <m...@cerenity.org> writes:

> Similarly, from
> a unix command line perspective, the following will automatically take
> advantage of all the CPUs I have available:

> (find |while read i; do md5sum $i; done|cut -b-32) 2>/dev/null |sort

No, it won't. At most, it will use four CPUs for user code.

Even so, the vast majority of CPU time in the above pipeline will
be spent in 'md5sum', but those calls will be run in series, not
in parallel. The very small CPU bursts used by 'find' and 'cut'
are negligible in comparison, and would likely fit within the
slots when 'md5sum' is waiting for I/O even on a single-CPU
system.

And I'm fairly certain that 'sort' won't start spending CPU time
until it has collected all its input, so you won't gain much
there either.


--
Thomas Bellman, Lysator Computer Club, Linköping University, Sweden
"We don't understand the software, and ! bellman @ lysator.liu.se
sometimes we don't understand the hardware, !
but we can *see* the blinking lights!" ! Make Love -- Nicht Wahr!

Paul Rubin

Sep 6, 2005, 5:08:03 PM
Jorgen Grahn <jgrah...@algonet.se> writes:
> I feel the recent SMP hype (in general, and in Python) is a red herring. Why
> do I need that extra performance? What application would use it?

How many mhz does the computer you're using right now have? When did
you buy it? Did you buy it to replace a slower one? If yes, you must
have wanted more performance. Just about everyone wants more
performance. That's why mhz keeps going up and people keep buying
faster and faster cpu's.

CPU makers seem to be running out of ways to increase mhz. Their next
avenue to increasing performance is SMP, so they're going to do that
and people are going to buy those. Just like other languages, Python
makes perfectly good use of increasing mhz, so it keeps up with them.
If the other languages also make good use of SMP and Python doesn't,
Python will fall back into obscurity.

> Am I prepared to pay the price (in bugs, lack of features, money,
> etc) for someone to implement this? There's already a lot of
> performance lost in bloatware people use everyday; why are we not
> paying the much lower price for having that fixed with traditional
> code optimization?

That is needed too. But obviously increased hardware speed has a lot
going for it. That's why people keep buying faster computers.

Paul Rubin

Sep 6, 2005, 5:09:20 PM
Thomas Bellman <bel...@lysator.liu.se> writes:
> And I'm fairly certain that 'sort' won't start spending CPU time
> until it has collected all its input, so you won't gain much
> there either.

For large input, sort uses the obvious in-memory-sort-plus-external-merge
algorithm, so it starts using CPU once there's enough input to fill
the memory buffer.
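
A rough sketch of that run-then-merge scheme (runs are kept in memory here for brevity; a real sort(1) spills each run to a temporary file):

    import heapq, itertools

    def sorted_runs(items, run_size=100000):
        # Sort fixed-size chunks as they arrive: CPU work starts as
        # soon as the first buffer fills, not after all input is read.
        it = iter(items)
        while True:
            run = list(itertools.islice(it, run_size))
            if not run:
                return
            run.sort()
            yield run

    def external_sort(items):
        # k-way merge of the sorted runs via a heap of
        # (next value, run index, position) triples.
        runs = list(sorted_runs(items))
        heap = [(run[0], i, 0) for i, run in enumerate(runs)]
        heapq.heapify(heap)
        while heap:
            value, i, j = heapq.heappop(heap)
            yield value
            if j + 1 < len(runs[i]):
                heapq.heappush(heap, (runs[i][j + 1], i, j + 1))

    for line in external_sort(['pear', 'apple', 'fig', 'cherry']):
        print line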

Bengt Richter

Sep 6, 2005, 7:26:17 PM
On Tue, 6 Sep 2005 19:35:49 +0000 (UTC), Thomas Bellman <bel...@lysator.liu.se> wrote:

>Michael Sparks <m...@cerenity.org> writes:
>
>> Similarly, from
>> a unix command line perspective, the following will automatically take
>> advantage of all the CPUs I have available:
>
>> (find |while read i; do md5sum $i; done|cut -b-32) 2>/dev/null |sort
>
>No, it won't. At most, it will use four CPUs for user code.
>
>Even so, the vast majority of CPU time in the above pipeline will
>be spent in 'md5sum', but those calls will be run in series, not
>in parallel. The very small CPU bursts used by 'find' and 'cut'
>are negligible in comparison, and would likely fit within the
>slots when 'md5sum' is waiting for I/O even on a single-CPU
>system.
>
>And I'm fairly certain that 'sort' won't start spending CPU time
>until it has collected all its input, so you won't gain much
>there either.
>

Why wouldn't a large sequence sort be internally broken down into parallel
sub-sequence sorts and merges that separate processors can work on?

Regards,
Bengt Richter

Paul Rubin

Sep 6, 2005, 7:28:02 PM
bo...@oz.net (Bengt Richter) writes:
> >And I'm fairly certain that 'sort' won't start spending CPU time
> >until it has collected all its input, so you won't gain much
> >there either.
> >
> Why wouldn't a large sequence sort be internally broken down into parallel
> sub-sequence sorts and merges that separate processors can work on?

Usually the input would be split into runs that would get sorted in
memory. Conventional wisdom says that the most important factor in
speeding up those sorts is making the runs as long as possible.
Depending on how complicated comparing two elements is, it's not clear
whether increased cache pressure from parallel processors hitting
different regions of memory would slow down the sort more than
parallelism would speed it up. Certainly any sorting utility that
tried to use parallel processors should use algorithms carefully
chosen and tuned around such issues.

Mike Meyer

Sep 6, 2005, 9:28:21 PM
Jorgen Grahn <jgrah...@algonet.se> writes:
> But it's interesting that the Unix pipeline Just Works (TM) with so little
> effort.

Yes it is. That's a result of two things:

1) The people who invented pipes were *very* smart (but not smart
enough to invent stderr at the same time :-).

2) Pipes use a dead simple concurrency model. It isn't even as
powerful as CSP. No shared memory. No synchronization primitives. Data
flows in one direction, and one direction only.

Basically, each element of a pipe can be programmed ignoring
concurrency. It doesn't get much simpler than that.
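
The same one-way-flow model can be mimicked inside a single Python process with generators; each stage below sees only the stream handed to it, with no shared state and no locks (the stages are illustrative):

    def numbers(n):
        # Source stage: emits values downstream, knows nothing
        # about who consumes them.
        for i in xrange(n):
            yield i

    def squares(stream):
        # Filter stage: reads upstream, writes downstream.
        for x in stream:
            yield x * x

    def only_even(stream):
        for x in stream:
            if x % 2 == 0:
                yield x

    # Composed like "numbers | squares | only_even" in a shell:
    print list(only_even(squares(numbers(10))))   # -> [0, 4, 16, 36, 64]

The generator version runs serially, of course; the pipe model's win is that each stage could become a separate process without changing its logic.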

<mike
--
Mike Meyer <m...@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.

Mike Meyer

Sep 6, 2005, 9:32:36 PM
Jeremy Jones <zane...@bellsouth.net> writes:
> 1) find a good clean way to utilize multi-CPU machines and

I like SCOOP. But I'm still looking for alternatives.

> 2) come up with a simple, consistent, Pythonic concurrency paradigm.

That's the hard part. SCOOP attaches attributes to *variables*. It
also changes the semantics of function calls based on the values of
those attributes. Part of the power of using SCOOP comes from the
processor detecting when a variable that has been declared as having an
attribute is used to reference objects to which the attribute doesn't
apply.

I'm not sure how Pythonic that can be made.

Robin Becker

Sep 7, 2005, 8:06:22 AM
to pytho...@python.org
Paul Rubin wrote:
> Jeremy Jones <zane...@bellsouth.net> writes:
>
>>to pass data around between processes. Or an idea I've been tinkering
>>with lately is to use a BSD DB between processes as a queue just like
>>Queue.Queue in the standard library does between threads. Or you
>>could use Pyro between processes. Or CORBA.
>
>
> I think that doesn't count as using a the multiple processors; it's
> just multiple programs that could be on separate boxes.
> Multiprocessing means shared memory.
>
> This module might be of interest: http://poshmodule.sf.net
>
It seems it might be a bit out of date. I've emailed the author via sf, but no
reply. Does anyone know if poshmodule works with latest stuff?
--
Robin Becker

Michael Sparks

Sep 7, 2005, 2:39:49 PM
Jorgen Grahn wrote:

> On Tue, 06 Sep 2005 08:57:14 +0100, Michael Sparks <m...@cerenity.org>
> wrote: ...
>> Are you so sure? I suspect this is due to you being used to writing code
>> that is designed for a single CPU system. What if you're basic model of
>> system creation changed to include system composition as well as
>> function calls? Then each part of the system you compose can potentially
>> run on a different CPU. Take the following for example:
> ...
>> It probably looks strange, but it's really just a logical extension of
>> the Unix command line's pipelines to allow multiple pipelines. Similarly,
>> from a unix command line perspective, the following will automatically
>> take advantage of all the CPUs I have available:
>>
>> (find |while read i; do md5sum $i; done|cut -b-32) 2>/dev/null |sort
>>
>> And a) most unix sys admins I know find that easy (probably the above
>> laughable) b) given a multiprocessor system will probably try to maximise
>> pipelining c) I see no reason why sys admins should be the only people
>> writing programs who use concurrency without thinking about it :-)
>
> Nitpick: not all Unix users are sysadmins ;-) Some Unix sysadmins actually
> have real users, and the clued users use the same tools. I used the 'make
> -j3' example elsewhere in the thread (I hadn't read this posting when I
> responded there).

I simply picked a group that does this often :-) The example pipeline I gave
above is, I admit, a particularly dire one. Things like the following are far
more silly:

# rm file; fortune | tee file | wc | cat - file
3 16 110
Bubble Memory, n.:
A derogatory term, usually referring to a person's
intelligence. See also "vacuum tube".

And

# (rm file; (while [ ! -s file ]; do echo >/dev/null; done; cat file |wc) & fortune | tee file) 2>/dev/null
Yea, though I walk through the valley of the shadow of APL, I shall
fear no evil, for I can string six primitive monadic and dyadic
operators together.
-- Steve Higgins
# 4 31 171

> It seems to me that there must be a flaw in your arguments, but I can't
> seem to find it ;-)

Sorry, but that's probably the funniest thing I've read all day :-)

Best Regards,


Michael.

Michael Sparks

Sep 7, 2005, 3:55:02 PM
Thomas Bellman wrote:
> Michael Sparks <m...@cerenity.org> writes:
>> Similarly, from
>> a unix command line perspective, the following will automatically take
>> advantage of all the CPUs I have available:
>> (find |while read i; do md5sum $i; done|cut -b-32) 2>/dev/null |sort
>
> No, it won't. At the most, it will use four CPU:s for user code.

OK, maybe I should've been more precise. That said, the largest machine I
could potentially get access to relatively easily would be a quad-CPU
machine, so if I wanted to be pedantic regarding "*I* have available",
the idea stands. (Note I didn't say take /best/ advantage - that would
require rewriting all the individual parts of the pipeline above to be
structured in a similar manner, or some other parallel approach.)

You've essentially re-iterated my point though - that it naturally sorts
itself out, and does the best fit it can, which is better than none
(despite this being a naff example - as I mentioned). Worst case, yes,
everything serialises itself.


Michael.


Nick Craig-Wood

Sep 8, 2005, 5:29:47 AM
Paul Rubin <http> wrote:
> Jorgen Grahn <jgrah...@algonet.se> writes:
> > I feel the recent SMP hype (in general, and in Python) is a red herring. Why
> > do I need that extra performance? What application would use it?
>
> How many mhz does the computer you're using right now have? When did
> you buy it? Did you buy it to replace a slower one? If yes, you must
> have wanted more performance. Just about everyone wants more
> performance. That's why mhz keeps going up and people keep buying
> faster and faster cpu's.
>
> CPU makers seem to be running out of ways to increase mhz. Their next
> avenue to increasing performance is SMP, so they're going to do that
> and people are going to buy those. Just like other languages, Python
> makes perfectly good use of increasing mhz, so it keeps up with them.
> If the other languages also make good use of SMP and Python doesn't,
> Python will fall back into obscurity.

Just to back your point up, here is a snippet from theregister about
Sun's new server chip. (This is a rumour piece but theregister
usually gets it right!)

Sun has positioned Niagara-based systems as low-end to midrange
Xeon server killers. This may sound like a familiar pitch - Sun
used it with the much delayed UltraSPARC IIIi processor. This time
around though Sun seems closer to delivering on its promises by
shipping an 8 core/32 thread chip. It's the most radical multicore
design to date from a mainstream server processor manufacturer and
arrives more or less on time.

It goes on later to say "The physical processor has 8 cores and 32
virtual processors" and runs at 1080 MHz.

So fewer GHz but more CPUs is the future according to Sun.

http://www.theregister.co.uk/2005/09/07/sun_niagara_details/

--
Nick Craig-Wood <ni...@craig-wood.com> -- http://www.craig-wood.com/nick

Jorgen Grahn

Sep 8, 2005, 5:30:56 PM
On 06 Sep 2005 14:08:03 -0700, Paul Rubin <http> wrote:
> Jorgen Grahn <jgrah...@algonet.se> writes:
>> I feel the recent SMP hype (in general, and in Python) is a red herring. Why
>> do I need that extra performance? What application would use it?
>
> How many mhz does the computer you're using right now have? When did
> you buy it?

I'm not a good example -- my fastest computer is a Mac Mini. Come to think
of it, last time I /really/ upgraded for CPU speed was when I bought my
Amiga 4000/030 in 1994 ;-)

My 200MHz Pentium feels a bit slow for some tasks, but most of the time I
cannot really tell the difference, and its lack of RAM and disk space is
much more limiting.

> Did you buy it to replace a slower one? If yes, you must
> have wanted more performance. Just about everyone wants more
> performance. That's why mhz keeps going up and people keep buying
> faster and faster cpu's.

I'm not sure that is true, for most people. People keep buying faster CPUs
because the slower ones become unavailable! How this works from an
economical and psychological point of view, I don't know.

> CPU makers seem to be running out of ways to increase mhz. Their next
> avenue to increasing performance is SMP, so they're going to do that
> and people are going to buy those. Just like other languages, Python
> makes perfectly good use of increasing mhz, so it keeps up with them.
> If the other languages also make good use of SMP and Python doesn't,
> Python will fall back into obscurity.

I don't believe that will ever happen. Either of them.

My CPU spends almost all its time running code written in C. That code has
been written over the last thirty years under the assumption that if there
is SMP, it will be taken advantage of on the process level. I cannot imagine
anyone sitting down and rewriting all that code to take advantage of
concurrency.

Thus, I don't believe that SMP and SMP-like technologies will improve
performance for ordinary people. Or, if it will, it's because different
processes will run concurrently, not because applications become concurrent.
Except for some applications like image processing programs, which no one
would dream of implementing in Python anyway.

New, radical and exciting things don't happen in computing very often.

Robin Becker

Sep 9, 2005, 5:45:22 AM
to pytho...@python.org
Robin Becker wrote:
> Paul Rubin wrote:
>

>>
>>This module might be of interest: http://poshmodule.sf.net
>>
>
> It seems it might be a bit out of date. I've emailed the author via sf, but no
> reply. Does anyone know if poshmodule works with latest stuff?

I haven't been able to contact posh's author, but this blog entry

http://blog.amber.org/2004/12/10/posh-power/

seems to suggest that posh might not be terribly useful in its current state

"Updated: I spoke with one of the authors, Steffen Viken Valvåg, and his comment
was that Posh only went through proof of concept, and never further, so it has a
lot of issues. That certainly clarifies the problems I’ve had with it. For now,
I’m going to put it on the back burner, and come back, and perhaps update it
myself."

--
Robin Becker

Robin Becker

Sep 12, 2005, 9:54:33 AM
to pytho...@python.org
Robin Becker wrote:
> Robin Becker wrote:
>
>>Paul Rubin wrote:
>>
>
>
>>>This module might be of interest: http://poshmodule.sf.net
>>>
>>
>>It seems it might be a bit out of date. I've emailed the author via sf, but no
>>reply. Does anyone know if poshmodule works with latest stuff?
>
from the horse's mouth comes confirmation that POSH is in a Norwegian Blue condition

from stef...@cs.uit.no

> Hi,
>
> Sorry for the late reply; You're right in that the project is close to dead.
> That's simply because I haven't had the time and motivation to maintain it,
> or rather, to improve it to the point where it's more usable and bug-free.
> Currently, I view POSH as a proof of concept that transparent access to
> Python objects allocated in shared memory is feasible. The main limitation
> of POSH is that it relies on the semantics of fork() to preserve the exact
> same memory layout in parent and child processes; porting POSH to Windows or
> other platforms without a fork() system call would be a challenge. Besides
> that, the implementation could use a more efficient memory allocator, and
> some features that sacrifice performance for flexibility probably need to be
> rethought. It also badly needs a shared-memory blocking queue
> implementation to support common programming models involving queues, and a
> portable lock implementation for more architectures. These latter
> improvements should be rather straightforward, but I don't know how to avoid
> the fundamental limitation of relying on fork().
>
> Feel free to repost these comments on comp.lang.python if you want.
>
> Cheers,
> Steffen


--
Robin Becker
