
Will multithreading make python less popular?


rush...@gmail.com

Feb 16, 2009, 4:34:34 AM2/16/09
to
Hi everybody,
I am an engineer. I am trying to improve my software development
abilities. I have started programming with Ruby. I like it very much,
but I want to add something more. Based on my previous research I
have designed a learning path for myself. It looks something like this:
1. Ruby (Mastering as much as possible)
2. Python (Mastering as much as possible)
3. Basic C++ or Basic Java
And the story begins here. As I searched on the net, I found that
because of a natural characteristic of Python, the GIL, we are not able
to write multi-threaded programs. Oops: in a time with lots of CPU
cores, we are not able to write multi-threaded programs. That is out of
fashion. How can such a powerful language not support multi-threading?
That is a big minus for Python. But there is something interesting,
something called multiprocessing. But is it a real alternative to
multi-threading? From what I found it is not: it has heavy hardware
requirements (lots of memory, lots of CPU power), and it is not easy to
implement, too much extra code...

After all of that, I started to think about omitting Python from my
career path and directly choosing C++ or Java. But I know Google and
YouTube use Python very much. How could they choose a language that
will be killed by multi-threading some time in the near future? I like
Python and its syntax, its flexibility.

What do you think about multi-threading and its effect on Python? Why
does Python have such a limitation, and what is the fix? Is it worth
investing time and money in a language that cannot take advantage of
multiple cores?

Thank you...
Rushen

Andreas Kaiser

Feb 16, 2009, 5:06:52 AM2/16/09
to
On 16 Feb., 10:34, rushen...@gmail.com wrote:
> Hi everybody,
> I am an engineer. I am trying to improve my software development
> abilities. I have started programming with ruby. I like it very much
> but i want to add something more. According to my previous research i
> have designed a learning path for myself. It's like something below.
>       1. Ruby (Mastering as much as possible)
>       2. Python (Mastering as much as possible)
>       3. Basic C++ or Basic Java
> And the story begins here. As i search on the net,  I have found that
> because of the natural characteristics of python such as GIL, we are
> not able to write multi threaded programs. Oooops, in a kind of time
> with lots of cpu cores and we are not able to write multi threaded
> programs. That is out of fashion. How a such powerful language doesn't
> support multi threading. That is a big minus for python.

On comp.lang.ruby David Masover wrote this at 29 Jul. 2008, 07:55:
-----
Right now, Ruby shares a problem with Python called the GIL -- the
Global (or Giant) Interpreter Lock. What this means is that only one
Ruby instruction may execute at a time. So even though they're using
separate OS threads, and even though different Ruby threads might run
on different cores, the speed of your program (at least the Ruby part)
is limited to the speed of a single core.
-----

Please don't mix up threads with parallel processing on more than one
CPU core.

Andreas

Aleksa Todorovic

Feb 16, 2009, 5:10:46 AM2/16/09
to rush...@gmail.com, pytho...@python.org
Hi, Rushen!

I'm also new to using Python, but from what I've found, the GIL is a
very intentional decision. It is one of the features of Python which
make it so powerful. I believe that if it didn't have the GIL, Python
wouldn't be anywhere near where it is now (regarding it as a language,
community, platform support, popularity, ...).

The most important question is: do you really need multi-threading for
what you do? There is a lot of software which doesn't require
multi-threading at all, or could be written without it. Also, if you
haven't learnt C++ or Java yet, multi-threading is not something you
should be worried about in the near future - there are a lot of other,
more important things you need to learn before opening the door to
multi-threading hell :-)

Best,
Aleksa


--
Aleksa Todorovic - Lead Programmer
Eipix Entertainment
http://www.eipix.com/

Michele Simionato

Feb 16, 2009, 5:27:11 AM2/16/09
to
On Feb 16, 10:34 am, rushen...@gmail.com wrote:
> Hi everybody,
> I am an engineer. I am trying to improve my software development
> abilities. I have started programming with ruby. I like it very much
> but i want to add something more. According to my previous research i
> have designed a learning path for myself. It's like something below.
>       1. Ruby (Mastering as much as possible)
>       2. Python (Mastering as much as possible)
>       3. Basic C++ or Basic Java
> And the story begins here. As i search on the net,  I have found that
> because of the natural characteristics of python such as GIL, we are
> not able to write multi threaded programs. Oooops, in a kind of time
> with lots of cpu cores and we are not able to write multi threaded
> programs. That is out of fashion. How a such powerful language doesn't
> support multi threading. That is a big minus for python. But there is
> something interesting, something like multi processing. But is it a
> real alternative for multi threading. As i searched it is not, it
> requires heavy hardware requirements (lots of memory, lots of cpu
> power). Also it is not easy to implement, too much extra code...

multiprocessing is already implemented for you in the standard
library. And of course it does not have heavy hardware requirements.
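
(A minimal sketch of what that looks like, not from the original post;
the CPU-bound worker and the numbers are invented for illustration.
multiprocessing.Pool spreads the calls over one worker process per core
by default.)

    import multiprocessing

    def cpu_bound(n):
        # invented busy-work: sum of squares up to n
        return sum(i * i for i in xrange(n))

    if __name__ == '__main__':
        pool = multiprocessing.Pool()                 # one process per core
        results = pool.map(cpu_bound, [10 ** 6] * 8)  # runs across the cores
        pool.close()
        pool.join()
        print results[:2]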

> After all of that, i start to think about omiting python from my
> carrier path and directly choosing c++ or java. But i know google or
> youtube uses python very much. How can they choose a language which
> will be killed by multi threading a time in near future. I like python
> and its syntax, its flexibility.
>
> What do you think about multi threading and its effect on python. Why
> does python have such a break and what is the fix. Is it worth to make
> investment of time and money to a language it can not take advantage
> of multi cores?

You can take advantage of multiple cores, just not with threads but
with processes, which BTW is the right way to go in most situations.
So (assuming you are not a troll) you are just mistaken in thinking
that the only way to use multiple cores is via multithreading.

Michele Simionato

Tim Rowe

Feb 16, 2009, 6:07:49 AM2/16/09
to pytho...@python.org
2009/2/16 <rush...@gmail.com>:

> Hi everybody,
> I am an engineer. I am trying to improve my software development
> abilities. I have started programming with ruby. I like it very much
> but i want to add something more. According to my previous research i
> have designed a learning path for myself. It's like something below.
> 1. Ruby (Mastering as much as possible)
> 2. Python (Mastering as much as possible)
> 3. Basic C++ or Basic Java
> And the story begins here. As i search on the net, I have found that
> because of the natural characteristics of python such as GIL, we are
> not able to write multi threaded programs. Oooops, in a kind of time
> with lots of cpu cores and we are not able to write multi threaded
> programs. That is out of fashion. How a such powerful language doesn't
> support multi threading. That is a big minus for python.

In a way, you've answered your own question. Why are you learning
three languages? Perhaps you've already realised that being
Turing-complete isn't the last word in language choice; that different
languages make different compromises, so the best language for one
task may not be the best language for a different task. Ok, Python
doesn't cope well with threading. It doesn't cope well with hard
real-time, either. So what? A deep saucepan isn't much use for making
an omelette (it can be done, but it's inefficient), but it's ideal for
making soup. Just because multiple cores are available doesn't mean
they will help your program significantly. Most real-world programs
work just fine in a single core, and it usually just isn't worth all
of the extra design effort, coding effort and debugging effort of
threading to cut the user response time down from a tenth of a second
to a twentieth. For *most* applications the single-thread Python
programmer will have something written, shipped and doing the job
whilst the multi-thread programmer is still trying to debug an
intermittent livelock that goes away whenever instrumentation is
added. For those few cases where threading is a genuine advantage,
Python is not ideal. But in the real world I doubt they're enough to
make a significant dent in Python's popularity.

--
Tim Rowe

rush...@gmail.com

Feb 16, 2009, 7:24:27 AM2/16/09
to
Hi again,

Dear Andreas

I know about the GIL in the Ruby interpreter; they are trying to solve
the problems it causes, but that is not so important to me, because I
like Ruby for its aesthetics and because it helps me grasp some
programming concepts. As far as I know it is not as powerful a language
as Java (by "powerful" I mean rich libraries, a wide community, a wide
variety of uses). The reason I want to learn Python is that it has a
syntax like Ruby but is also a powerful language, though of course not
as much as Java or C++.

I am also trying to find an answer to the question in my head: is
there a way to use multiple cores in Python that is as effective and
as easy as multi-threading?

Dear Aleksa

Of course I have a long way to go, even if I am too old for these
things. However, I think that being able to use two cores or more is a
very big plus for execution time and speed. I believe many programs
will be rewritten to be able to use multiple cores.

Dear Michele

As far as I know, every process has its own resources, like memory and
CPU, so I think it requires more resources than multi-threading.

Dear Tim

I want to learn Python plus C++ or Java because I want Python's
flexibility and ease of use together with C++'s or Java's stability,
speed, and power.

Thank you
Rushen

Tim Rowe

Feb 16, 2009, 7:50:42 AM2/16/09
to pytho...@python.org
2009/2/16 <rush...@gmail.com>:

> I want to learn python + c++ or java because of the desire of having
> python's felxibility and easiness and c++ or java's stability and
> speed and power together.

Yes, that's what I mean by different tradeoffs. Python is much easier
to program in than C++ or Java (in my experience, at least), but C++
and Java scale better and at least have the potential to be faster.
I'm not convinced that library support is significantly better for C++
or Java -- Python's libraries seem pretty rich to me. And that extra
speed might not be needed as often as you think. My postgrad
dissertation involved heavy number-crunching on large data sets, and
in my proposal I said I'd switch from Python to C++ when Python got
too slow. In fact, Python never did get too slow (I didn't even have
to switch to numpy), and plugging together ad-hoc modules, defined in
an XML script, was a dream in Python, whereas I'd probably still be
coding it today in C++. Horses for courses. It's almost always wrong to
say
that language A is better than language B; the most you can say is
that language A is better than language B for some specific task.


--
Tim Rowe

andrew cooke

Feb 16, 2009, 8:17:06 AM2/16/09
to pytho...@python.org
rush...@gmail.com wrote:
> Hi everybody,
> I am an engineer. I am trying to improve my software development
> abilities. I have started programming with ruby. I like it very much
> but i want to add something more. According to my previous research i
> have designed a learning path for myself. It's like something below.
> 1. Ruby (Mastering as much as possible)
> 2. Python (Mastering as much as possible)
> 3. Basic C++ or Basic Java
> And the story begins here. As i search on the net, I have found that
> because of the natural characteristics of python such as GIL, we are
> not able to write multi threaded programs. Oooops, in a kind of time
> with lots of cpu cores and we are not able to write multi threaded
> programs. That is out of fashion. How a such powerful language doesn't
> support multi threading. That is a big minus for python. But there is
> something interesting, something like multi processing. But is it a
> real alternative for multi threading. As i searched it is not, it
> requires heavy hardware requirements (lots of memory, lots of cpu
> power). Also it is not easy to implement, too much extra code...

I understand why you are asking this question - you want to learn, and
that is good - but as you say, you are a beginner, and you are focussing
on one thing that has caught your eye when there are many other issues
that are more important.

The GIL is an implementation detail. I suspect that it could be largely
removed if there was sufficient need. But that still wouldn't make Python
a good language for programming on multiple cores. That's not as big a
deal as you think, because we currently DON'T KNOW what would make a good
language for programming on multiple cores - it's an open topic in the
wider community.

It may be, for example, that the best approaches for concurrent
programming require a lot of automatic pre-processing that needs static
type information. Or that mutable state is simply too much of a problem
and pure functional programming is the only way forwards. Both of these
would be much more serious problems for Python than the GIL. On the other
hand, Python has inbuilt support for co-routines. Experience gained with
that might lead towards a more actors-like solution that fits naturally
within Python.

So my first point is that you are worrying about a problem that no-one
yet knows how to solve, and worrying about a small and largely
irrelevant part of that.


Second, you are committing the common mistake of over-estimating the
importance of efficiency. Python is not a fast language; it never has
been. That does not stop it being extremely useful. This is largely
because when it needs to do "hard work" it delegates to C libraries. Why
can this not apply to concurrent programming too? Maybe the top level
logic will stay in a single thread, because that is easier to program, but
libraries will use multiple cores.

So my second point is that you are being too restrictive in considering
what a future solution might look like.


In conclusion, then, I strongly suggest you stop worrying so much about
things that you don't yet have the experience to see completely, and
instead get more experience under your belt. That sounds more harsh than
I really mean - obviously worrying about this kind of thing is part of
learning.

Incidentally, if you already know Ruby and want to improve your abilities
I am not sure learning Python is the best use of your time. The two
languages are very similar. You might be better taking a huge leap to
something like Haskell or OCaml. Or, if you want to get hands-on
experience of concurrency now, Erlang.

Andrew

> After all of that, i start to think about omiting python from my
> carrier path and directly choosing c++ or java. But i know google or
> youtube uses python very much. How can they choose a language which
> will be killed by multi threading a time in near future. I like python
> and its syntax, its flexibility.
>
> What do you think about multi threading and its effect on python. Why
> does python have such a break and what is the fix. Is it worth to make
> investment of time and money to a language it can not take advantage
> of multi cores?
>
> Thank you...
> Rushen



andrew cooke

Feb 16, 2009, 8:21:47 AM2/16/09
to pytho...@python.org
andrew cooke wrote:
> something like Haskell or OCaml. Or, if you want to get hands-on
> experience of concurrency now, Erlang.

I think for once I said something useful there. I think you would
probably enjoy Erlang, and it would be very useful for understanding
concurrency. Also, Erlang is not as strange a language as Haskell or
OCaml. You will probably find it quite familiar. And there's an
excellent book (Joe Armstrong's "Programming Erlang").

At the risk of hastening Python's demise :o) I strongly encourage you to
learn Erlang instead of Python.

Andrew


rush...@gmail.com

Feb 16, 2009, 8:26:52 AM2/16/09
to
Dear Andrew,

I think reading "Beating the Averages" by Paul Graham before getting
some experience is not a very good decision.
:)

Thank you, Andrew

Aleksa Todorovic

Feb 16, 2009, 9:05:26 AM2/16/09
to rush...@gmail.com, pytho...@python.org
Or the other way around :-)

Little off-topic, but...

After several months of fighting with Java threads, deadlocks,
livelocks, and race conditions, I've rewritten my game server
synchronization so that threads are executed in a concurrent way (the
only exceptions being the socket sending and receiving threads), and
all synchronization is implemented on top of Java blocking structures
(a little blasphemy: the way Stackless does it). After that change,
the game suddenly started working like a charm!


On Mon, Feb 16, 2009 at 13:24, <rush...@gmail.com> wrote:
> Of course i have a long way to ago even if i am too old for these
> things. However, i think that being able to use both cores or more is
> a very big plus for execution time, speed like these. I believe many
> programs will be rewritten to be able to use multiple cores.

--

rush...@gmail.com

Feb 16, 2009, 9:18:35 AM2/16/09
to
Dear Aleksa,

As you mentioned, using multiple cores makes programs faster and more
popular. But what about Stackless Python? Does it support the same set
of Python libraries as CPython, or does it have a special subset?

Thank you
Rushen

Hendrik van Rooyen

Feb 16, 2009, 10:09:55 AM2/16/09
to pytho...@python.org
"andrew cooke" <andrew@a....org> wrote:


>The GIL is an implementation detail. I suspect that it could be largely
>removed if there was sufficient need. But that still wouldn't make Python
>a good language for programming on multiple cores. That's not as big a
>deal as you think, because we currently DON'T KNOW what would make a good
>language for programming on multiple cores - it's an open topic in the
>wider community.

Those who do not study history are doomed to repeat it.

Occam was the language that should have won the marketing
prize, but didn't.

- Hendrik


Christof Donat

Feb 16, 2009, 10:54:10 AM2/16/09
to
Hi,

> But there is
> something interesting, something like multi processing. But is it a
> real alternative for multi threading. As i searched it is not, it
> requires heavy hardware requirements (lots of memory, lots of cpu
> power).

Not necessarily.

For the memory, modern operating systems use copy-on-write semantics
in their applications' virtual memory space. That means that after
starting a new process with fork(), the new process shares all physical
memory pages with the original one as long as none of them writes to
that memory. The first write is caught by the OS; the page is then
copied, and the writing process will from then on use the new copy of
that page, while the other one (or possibly many others) stays with the
original. Multiprocessing needs more memory than well-optimized
multithreading, but it is not as bad as is often said.
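
(A tiny POSIX-only sketch of the fork() behaviour described above, not
from the original post; the list is invented just to give the parent
some data for the child to share copy-on-write.)

    import os

    data = range(10 ** 6)            # allocated once, in the parent

    pid = os.fork()                  # child shares the parent's pages
    if pid == 0:
        # child: reading is cheap, although CPython's reference counting
        # does write to some pages and so triggers a few copies
        total = sum(data)
        os._exit(0)
    else:
        os.waitpid(pid, 0)           # parent waits for the child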

For the CPU, there are two things that make the difference here.

One is context switching. When the OS switches between two threads of
the same process, it does not need to change the memory lookup table.
When switching between processes, that is always necessary: switching
from a thread of process A to a thread of process B is just like
switching from A to B.

Using threads just increases the probability that the OS can do a
faster context switch. So threads are faster here, but it is not a big
issue either.

The second thing is communication. With threads you can simply use
variables to communicate between threads; to prevent conflicting access
to those variables you use e.g. mutexes. That can be done with shared
memory as well. Shared memory is a bit more difficult to handle than
simply using the heap, but again the impact is not really so big.
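
(The stdlib multiprocessing module wraps exactly that pattern; a small
sketch with invented numbers - a counter in shared memory guarded by a
mutex, incremented from several processes.)

    import multiprocessing

    def worker(counter, lock):
        for _ in range(1000):
            with lock:                       # mutex guarding the shared value
                counter.value += 1

    if __name__ == '__main__':
        counter = multiprocessing.Value('i', 0)      # int in shared memory
        lock = multiprocessing.Lock()
        procs = [multiprocessing.Process(target=worker, args=(counter, lock))
                 for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print counter.value                          # 4000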

Google uses processes for the tabs in Chrome, because that way they
get around many memory management problems they would have with threads
or with a single-threaded reactor. Using processes is not per se a bad
idea. You pay a bit in memory and CPU, but in many situations you get a
much simpler programming model.

Christof


seb....@gmail.com

Feb 16, 2009, 11:25:41 AM2/16/09
to
hi there,

[snip]


> Google uses Processes for the Tabs in Chrome, because that way they get
> around many Memory Management Problems they would have with Threads or with
> a singlethreaded reactor. Using Processes is not per se a bad Idea. You pay
> a bit with Memory and CPU but in many situations you get a much simpler
> programming model.

note also that one can bet on things like KSM [1] to further improve
the memory situation.

For example, we are investigating its use in the large (read heavy)
application frameworks at CERN, where the typical vmem footprint is
~2Gb. Running KSM together with a multiprocessing-based event loop
manager, we managed to overcommit ~300% of the memory (i.e. run 3
instances of our application on a 2Gb-per-core machine).

cheers,
sebastien

[1] http://lwn.net/Articles/306642

Scott David Daniels

Feb 16, 2009, 12:44:35 PM2/16/09
to
andrew cooke wrote:
<The best short answer to the "GIL issue" that I have read in a long time>

Thanks a lot for writing this, I'll be pointing to it from time to time.
Were I writing such a thing I'd focus too much on the how (issues I know
that non-GIL true concurrency faces), and not enough on the high level
view.

Bravo.

--Scott David Daniels
Scott....@Acm.Org

rush...@gmail.com

Feb 16, 2009, 4:20:11 PM2/16/09
to
Hi again

OpenERP and ERP5 were written in Python, as far as I know. I really
wonder how they do this without threads. I want to see a real-time
graph updating while I am working on the same screen. What is the
secret?

Thanks
Rushen

Aahz

Feb 16, 2009, 10:35:02 PM2/16/09
to
In article <mailman.52.12347978...@python.org>,

Hendrik van Rooyen <ma...@microcorp.co.za> wrote:
>
>Occam was the language that should have won the marketing prize, but
>didn't.

It wasn't simple enough.
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/

Weinberg's Second Law: If builders built buildings the way programmers wrote
programs, then the first woodpecker that came along would destroy civilization.

Ben Finney

Feb 16, 2009, 10:42:17 PM2/16/09
to
aa...@pythoncraft.com (Aahz) writes:

> In article <mailman.52.12347978...@python.org>,
> Hendrik van Rooyen <ma...@microcorp.co.za> wrote:
> >Occam was the language that should have won the marketing prize,
> >but didn't.
>
> It wasn't simple enough.

*bdom-tsssh* <URL:http://en.wikipedia.org/wiki/Occam's_razor>

--
\ “I guess we were all guilty, in a way. We all shot him, we all |
`\ skinned him, and we all got a complimentary bumper sticker that |
_o__) said, ‘I helped skin Bob.’” —Jack Handey |
Ben Finney

Michele Simionato

Feb 16, 2009, 11:53:39 PM2/16/09
to

Here is an example of using multiprocessing (which is included
in Python 2.6 and easy_installable in older Python versions)
to print a spin bar while a computation is running:

import sys, time
import multiprocessing

DELAY = 0.1
DISPLAY = [ '|', '/', '-', '\\' ]

def spinner_func(before='', after=''):
    write, flush = sys.stdout.write, sys.stdout.flush
    pos = -1
    while True:
        pos = (pos + 1) % len(DISPLAY)
        msg = before + DISPLAY[pos] + after
        write(msg); flush()
        write('\x08' * len(msg))
        time.sleep(DELAY)

def long_computation():
    # emulate a long computation
    time.sleep(3)

if __name__ == '__main__':
    spinner = multiprocessing.Process(
        None, spinner_func, args=('Please wait ... ', ''))
    spinner.start()
    try:
        long_computation()
        print 'Computation done'
    finally:
        spinner.terminate()

Paddy

Feb 17, 2009, 1:27:15 AM2/17/09
to
On Feb 16, 9:34 am, rushen...@gmail.com wrote:
> Hi everybody,
> I am an engineer. I am trying to improve my software development
> abilities. I have started programming with ruby. I like it very much
> but i want to add something more. According to my previous research i
> have designed a learning path for myself. It's like something below.
>       1. Ruby (Mastering as much as possible)
>       2. Python (Mastering as much as possible)
>       3. Basic C++ or Basic Java
> And the story begins here. As i search on the net,  I have found that
> because of the natural characteristics of python such as GIL, we are
> not able to write multi threaded programs.

You are likely to find a lot of 'tick-list' type comparison data on
the web that either needs a lot of knowledge to interpret, or is
misleading, or is wrong - or all three!

There are several versions of Python out there, such as IronPython,
Stackless Python, and Jython, as well as CPython - the main Python
release. They have different threading capabilities, but compilers of
feature-comparison tick-lists tend to just stick to what CPython can
do.

As an aside: if you were thinking of using threading for performance
reasons, then it's best to first think of improving your general
ability to explore different algorithms. A change of algorithm often
has the most impact on the performance of code. A better
single-threaded, single-process algorithm can offer better performance
than throwing threads or multiple processes at a poor underlying
algorithm.

I was just exploring different ways of solving a problem on my blog:
http://paddy3118.blogspot.com/2009/02/comparison-of-python-solutions-to.html
(But no parallel solutions were attempted).

Have fun programming!

- Paddy.

Hendrik van Rooyen

Feb 17, 2009, 4:27:54 AM2/17/09
to pytho...@python.org
"Ben Finney" <bignose+h...@benfinney.id.au> wrote:

aa...@pythoncraft.com (Aahz) writes:

> In article <mailman.52.12347978...@python.org>,
> Hendrik van Rooyen <ma...@microcorp.co.za> wrote:
> >Occam was the language that should have won the marketing prize,
> >but didn't.
>
> It wasn't simple enough.

If Aahz was trolling, then he got me. I know about William of Occam,
after whom the language was named, and his razor, but did not make
the association, and answered seriously.

If you play with razor blades, you get cut.
:-)

- Hendrik


Hendrik van Rooyen

Feb 17, 2009, 4:18:36 AM2/17/09
to pytho...@python.org
"Aahz" <aahz@py....aft.com> wrote:


> In article <mailman.52.12347978...@python.org>,


> Hendrik van Rooyen <ma....orp.co.za> wrote:
> >
> >Occam was the language that should have won the marketing prize, but
> >didn't.
>
> It wasn't simple enough.

I thought (at the time) that it was quite good at hiding some
horrible complexities of communication between different
processes on the same, and different processors.

All done by the compiler, automagically.

I think now that a hard look at the underlying techniques
used then could add something to the debate referred to
earlier - but there may be a barrier because the dataflow
or systolic array type programming model is not one
that is currently fashionable.

- Hendrik


andrew cooke

Feb 17, 2009, 6:34:39 AM2/17/09
to pytho...@python.org

why do you think that current work is ignorant of occam? occam itself was
based on hoare's "communicating sequential processes" which is a classic
of the field. the ideas behind occam are not unknown and it hasn't been
forgotten (there are many libraries based on synchronous message passing;
one for java called jcsp for example -
http://www.cs.kent.ac.uk/projects/ofa/jcsp/ ; the "rendezvous model"
(receiving tasks wait for messages) is used in ada).

but really it did very little to hide the complexities of parallel
computing - it's famous because it (and the transputer platform) was one
of the first languages to take parallelism "seriously", not because it
presented any kind of silver bullet (more generally, it was a pretty crude
language, more suited to small embedded applications than large projects -
it didn't even have dynamically sized arrays)

there's a comment here http://lambda-the-ultimate.org/node/2437 that shows
the limitations of occam: "I used Occam (the transputer implementation of
CSP) very heavily in the 1980s and early 1990s, and eventually started
referring to channels as "the return of the GOTO", since in any moderately
complex application, you spent a lot of time wondering, "If I put bytes in
*here*, who will they go to?" Addressable actors and/or tuple spaces both
felt much more scalable (in the coding sense)."

(disclaimer - i haven't used it personally. once i was asked to maintain
an occam system, but somehow managed to dodge the responsibility)

if you look at erlang, which is one of the more successful parallel
languages at the moment, you'll see some similarity to occam (message
passing is explicit), but shifting to asynchronous messages helps give a
more robust system.

andrew


Hendrik van Rooyen wrote:


> "Aahz" <aahz@py....aft.com> wrote:
>
>
>> In article <mailman.52.12347978...@python.org>,

>> Hendrik van Rooyen <ma....orp.co.za> wrote:
>> >
>> >Occam was the language that should have won the marketing prize, but
>> >didn't.
>>
>> It wasn't simple enough.
>

> I thought (at the time) that it was quite good at hiding some
> horrible complexities of communication between different
> processes on the same, and different processors.
>
> All done by the compiler, automagically.
>
> I think now that a hard look at the underlying techniques
> used then could add something to the debate referred to
> earlier - but there may be a barrier because the dataflow
> or systolic array type programming model is not one
> that is currently fashionable.
>
> - Hendrik
>
>



Bruno Desthuilliers

Feb 17, 2009, 6:38:43 AM2/17/09
to
rush...@gmail.com wrote:
(snip)

> And the story begins here. As i search on the net, I have found that
> because of the natural characteristics of python such as GIL, we are
> not able to write multi threaded programs.

I'm surprised no one here has corrected that point yet, so here we go:
yes, Python does support multithreading. The fact that threads won't
run in parallel on a multicore machine (due to the GIL) is a different
problem (cf. Andrew Cooke's answer on this - and his advice to study
Erlang if you want to do concurrent programming).

andrew cooke

Feb 17, 2009, 6:58:38 AM2/17/09
to pytho...@python.org

ah, sorry, i noticed this last night in another comment from rushenaly,
but forgot to say anything.

in case it's still not clear: you can have threads even on a single core.
this is done by "time slicing" - some cpu time is given to one thread,
then to another.

exactly who does the slicing can vary. in the case of a gui (which is
what i was about to explain last night then got distracted) it's quite
common for the gui library itself to do the scheduling of work. so even
though the gui library uses a single thread, it can update several
windows, handle user input, etc "in parallel". the next level is that the
language itself does the scheduling - that's what is commonly called
"threads". finally the operating system can share things out (and that's
called processes). but these are all basically the same thing, can happen
on a single core, and are not affected by the GIL (i have simplified
above; threads can also be a service that the operating system provides to
a language)
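
(a tiny illustration of that time slicing, not from the original
message: two python threads taking turns on a single core.)

    import threading, time

    def ticker(name):
        for i in range(3):
            print name, i        # both threads make progress "in parallel"
            time.sleep(0.1)      # yields the CPU (and the GIL) to the other

    a = threading.Thread(target=ticker, args=('a',))
    b = threading.Thread(target=ticker, args=('b',))
    a.start(); b.start()
    a.join(); b.join()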

andrew


sturlamolden

Feb 17, 2009, 10:50:33 AM2/17/09
to
On 16 Feb, 10:34, rushen...@gmail.com wrote:

> And the story begins here. As i search on the net, I have found that
> because of the natural characteristics of python such as GIL, we are

> not able to write multi threaded programs. Oooops, in a kind of time

> with lots of cpu cores and we are not able to write multi threaded
> programs.

The GIL does not prevent multithreaded programs. If it did, why does
Python have a "threading" module?

The GIL prevents one use of threads: parallel processing in plain
Python. You can still do parallel processing using processes. Just
import "multiprocessing" instead of "threading". The two modules have
fairly similar APIs. You can still use threads to run tasks in the
background.
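
(To make the "fairly similar APIs" point concrete, a minimal sketch
with an invented worker: the only change between the two halves is
which module supplies the worker class.)

    import threading
    import multiprocessing

    def work(n):
        # invented CPU-bound task
        print sum(i * i for i in xrange(n))

    if __name__ == '__main__':
        # a background task with a thread (fine under the GIL)
        t = threading.Thread(target=work, args=(10 ** 6,))
        t.start(); t.join()

        # the same shape with a process, which can use another core
        p = multiprocessing.Process(target=work, args=(10 ** 6,))
        p.start(); p.join()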

The GIL, by the way, is an implementation detail. Nobody likes it very
much, but it makes it easier to extend Python with C libraries
(Python's raison d'être). Not all C libraries are thread-
safe. The GIL is also used to synchronize access to reference counts.
In fact, Ruby is finally going to get a GIL as well. So it can't be
that bad.

As for parallel processing and multicore processors:

1. Even if a Python script can only exploit one core, we are always
running more than one process on the computer. For some reason this
obvious fact has to be repeated.

2. Parallel processing implies "need for speed". We have a 200x speed
penalty from Python alone. The "need for speed" is better served by
moving computational bottlenecks to C or Fortran. And in this case,
the GIL does not prevent us from doing parallel processing. The GIL
only affects the Python portion of the code.

3. Threads are not designed to be an abstraction for parallel
processing. For this they are awkward, tedious, and error prone.
Current threading APIs were designed for asynchronous tasks. Back in
the 1990s when multithreading became popular, SMPs were luxury only
few could afford, and multicore processors were unheard of.

4. The easy way to do parallel processing is not threads but OpenMP,
MPI, or HPF. Threads are used internally by OpenMP and HPF, but those
implementation details are taken care of by the compiler. Parallel
computers have been used by scientists and engineers for three decades
now, and threads have never been found a useful abstraction for manual
coding. Unfortunately, this knowledge has not been passed on from
physicists and engineers to the majority of computer programmers.
Today, there is a whole generation of misguided computer scientists
thinking that threads must be the way to use the new multicore
processors. Take a lesson from those actually experienced with
parallel computers and learn OpenMP!

5. If you still insist on parallel processing with Python threads --
ignoring what you can do with multiprocessing and native C/Fortran
extensions -- you can still do that as well. Just compile your script
with Cython or Pyrex and release the GIL manually. The drawback is
that you cannot touch any Python objects (only C objects) in your GIL-
free blocks. But after all, the GIL is used to synchronize reference
counting, so you would have to synchronize access to the Python
objects anyway.


import threading

def _threadproc():
    with nogil:
        # we do not hold the GIL here
        pass
    # now we have got the GIL back
    return

def foobar():
    t = threading.Thread(target=_threadproc)
    t.start()
    t.join()

That's it.

Sturla Molden

sturlamolden

Feb 17, 2009, 10:57:44 AM2/17/09
to
On 16 Feb, 15:18, rushen...@gmail.com wrote:

> As you mentioned, using multi cores makes programs more fast and more
> popular. But what about stackless python? Does it interpret same set
> of python libraries with Cpython or Does it have a special sub set?

Stackless and CPython have a GIL, Jython and IronPython do not.

S.M.

Christian Heimes

Feb 17, 2009, 11:55:46 AM2/17/09
to pytho...@python.org
rush...@gmail.com wrote:
> As you mentioned, using multi cores makes programs more fast and more
> popular. But what about stackless python? Does it interpret same set
> of python libraries with Cpython or Does it have a special sub set?

Your assumption is wrong. Multiple cores are able to speed up some
kinds of programs, but they don't necessarily increase the speed of
every program.

Lots of programs are IO bound (hard disk IO, network IO, memory IO).
Multiple CPU cores don't increase the speed of your hard disk! Lots of
algorithms are not designed for parallel computing. New algorithms must
be invented from the ground up.
In fact, multiple CPUs can decrease the speed of a program, because
every additional CPU increases the cost of housekeeping and cache
invalidation. Please don't believe the marketing: neither 64-bit nor
multiple CPUs magically increase the speed of your computer. It's going
to take at least half a decade until the tools for multi-core
development are working properly and reliably.

Christian

Hyuga

Feb 17, 2009, 3:31:50 PM2/17/09
to
On Feb 16, 4:34 am, rushen...@gmail.com wrote:
> Hi everybody,
> I am an engineer. I am trying to improve my software development
> abilities. I have started programming with ruby. I like it very much
> but i want to add something more. According to my previous research i
> have designed a learning path for myself. It's like something below.
>       1. Ruby (Mastering as much as possible)
>       2. Python (Mastering as much as possible)
>       3. Basic C++ or Basic Java
> And the story begins here. As i search on the net,  I have found that
> because of the natural characteristics of python such as GIL, we are
> not able to write multi threaded programs. Oooops, in a kind of time
> with lots of cpu cores and we are not able to write multi threaded
> programs. That is out of fashion. How a such powerful language doesn't
> support multi threading. That is a big minus for python. But there is

> something interesting, something like multi processing. But is it a
> real alternative for multi threading. As i searched it is not, it
> requires heavy hardware requirements (lots of memory, lots of cpu
> power). Also it is not easy to implement, too much extra code...
>
> After all of that, i start to think about omiting python from my
> carrier path and directly choosing c++ or java. But i know google or
> youtube uses python very much. How can they choose a language which
> will be killed by multi threading a time in near future. I like python
> and its syntax, its flexibility.
>
> What do you think about multi threading and its effect on python. Why
> does python have such a break and what is the fix. Is it worth to make
> investment of time and money to a language it can not take advantage
> of multi cores?

Though I'm sure this has already been shot to death, I would just add
that maybe the better question would be: "Will Python make
multithreading less popular?" My answer would be something along the
lines of, that would be nice, but it just depends on how many people
adopt Python for their applications, realize they can't use threads to
take advantage of multiple CPUs, ask this same bloody question for the
thousandth time, and are told to use the multiprocessing module.

Graham Dumpleton

Feb 17, 2009, 10:47:14 PM2/17/09
to
On Feb 16, 9:27 pm, Michele Simionato <michele.simion...@gmail.com>
wrote:

> On Feb 16, 10:34 am, rushen...@gmail.com wrote:
>
>
>
> > Hi everybody,
> > I am an engineer. I am trying to improve my software development
> > abilities. I have started programming with ruby. I like it very much
> > but i want to add something more. According to my previous research i
> > have designed a learning path for myself. It's like something below.
> >       1. Ruby (Mastering as much as possible)
> >       2. Python (Mastering as much as possible)
> >       3. Basic C++ or Basic Java
> > And the story begins here. As i search on the net,  I have found that
> > because of the natural characteristics of python such as GIL, we are
> > not able to write multi threaded programs. Oooops, in a kind of time
> > with lots of cpu cores and we are not able to write multi threaded
> > programs. That is out of fashion. How a such powerful language doesn't
> > support multi threading. That is a big minus for python. But there is
> > something interesting, something like multi processing. But is it a
> > real alternative for multi threading. As i searched it is not, it
> > requires heavy hardware requirements (lots of memory, lots of cpu
> > power). Also it is not easy to implement, too much extra code...
>
> multiprocessing is already implemented for you in the standard
> library.
> Of course it does not require heavy hardware requirements.

>
> > After all of that, i start to think about omiting python from my
> > carrier path and directly choosing c++ or java. But i know google or
> > youtube uses python very much. How can they choose a language which
> > will be killed by multi threading a time in near future. I like python
> > and its syntax, its flexibility.
>
> > What do you think about multi threading and its effect on python. Why
> > does python have such a break and what is the fix. Is it worth to make
> > investment of time and money to a language it can not take advantage
> > of multi cores?
>
> You can take advantage of multi cores, just not with threads but with
> processes,
> which BTW is the right way to go in most situations. So (assuming you
> are not
> a troll) you are just mistaken in thinking that the only way to
> use multicores is via multithreading.

It is also a mistaken belief that you cannot take advantage of multi
cores with multiple threads inside of a single process using Python.

What no one seems to remember is that when calls are made into Python
extension modules implemented in C code, they have the option of
releasing the Python GIL. By releasing the GIL at that point, they
allow other Python threads to run at the same time as operations are
being done in C code in the extension module.

Obviously if the extension module needs to manipulate Python objects
it will not be able to release the GIL, but not all extension modules
are going to be like this, and they can have quite sizable sections of
code that run with the GIL released. Thus, you could have many threads
running at the same time in sections of C code, alongside the one
thread currently executing Python code.

A very good example of this is when embedding Python inside of
Apache. So much stuff is actually done inside of Apache C code with
the GIL released that there is more than ample opportunity for
multiple threads to be running across cores at the same time.
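
(A rough way to see this from pure Python - a sketch, using time.sleep
as a stand-in for any C-level call that releases the GIL: five threads
blocked in C code overlap, so the whole run takes about one second
rather than five.)

    import threading, time

    def blocked_in_c():
        time.sleep(1)            # the GIL is released while sleeping in C

    start = time.time()
    threads = [threading.Thread(target=blocked_in_c) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print 'elapsed: %.1f seconds' % (time.time() - start)   # ~1.0, not 5.0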

Graham

Michele Simionato

Feb 17, 2009, 11:33:46 PM2/17/09
to
On Feb 18, 4:47 am, Graham Dumpleton <Graham.Dumple...@gmail.com>
wrote:

> It is also a mistaken belief that you cannot take advantage of multi
> cores with multiple threads inside of a single process using Python.
>
> What no one seems to remember is that when calls are made into Python
> extension modules implemented in C code, they have the option of
> releasing the Python GIL. By releasing the Python GIL at that point,
> it would allow other Python threads to run at the same time as
> operations are being done in C code in the extension module.

You are perfectly right, and no one forgot this point,
I am sure. However, I think we were answering the question
"can pure Python code take advantage of multiple CPUs
via multithreading?", and the answer is no. Of course a
C extension can do that, but that is beside the point.
It is still worth noticing - as you did -
that some well-known Python (+ C extension) libraries -
such as mod_wsgi - are already well equipped to manage
multithreading on multiple cores without any further
effort from the user. This is a good point.

Michele Simionato

rush...@gmail.com

Feb 19, 2009, 6:36:13 AM2/19/09
to
Thank you for all your answers...

I think i am going to pick Java instead of Python...

Rushen

Steve Holden

Feb 19, 2009, 7:57:59 AM2/19/09
to pytho...@python.org
rush...@gmail.com wrote:
> Thank you for all your answers...
>
> I think i am going to pick Java instead of Python...
>
Well, good luck. See what a helpful bunch of people you meet in the
Python world? Glad you found all the advice helpful. Come back when you
want to try Python!

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/

rush...@gmail.com

Feb 19, 2009, 10:39:01 AM2/19/09
to
Thank you Steve,

I really wanted to learn Python, but as I said, I don't want to make a
dead investment. I hope someone can fix these design errors and maybe
write an interpreter in Python :)

Thank you so much great community...
Rushen

Tim Rowe

Feb 19, 2009, 10:59:36 AM2/19/09
to pytho...@python.org
2009/2/19 <rush...@gmail.com>:

> Thank you Steve,
>
> I really wanted to learn python, but as i said i don't want to make a
> dead investment. I hope someone can fix these design errors and maybe
> can write an interpreter in python :)

Good luck with Java, and with your search for a perfect language. I
think it will be a long search.

--
Tim Rowe

Christian Heimes

Feb 19, 2009, 11:08:32 AM2/19/09
to pytho...@python.org
rush...@gmail.com schrieb:

> Thank you Steve,
>
> I really wanted to learn python, but as i said i don't want to make a
> dead investment. I hope someone can fix these design errors and maybe
> can write an interpreter in python :)

Good luck with Java! You have just traded one "design flaw" for a
different, more fatal design error. Please read this paper from a
Berkeley CS professor on why people think that threads are evil and not
the right solution for concurrency on multi-core systems:
http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf

Quote:
[...] non-trivial multi-threaded programs are incomprehensible for humans.

HTH
Christian

rush...@gmail.com

Feb 19, 2009, 11:10:56 AM2/19/09
to
Thank you Tim...

It is not a search for a perfect language. It is a search for a
language capable of meeting the modern world's needs.

Rushen

Tim Rowe

Feb 19, 2009, 11:48:52 AM2/19/09
to pytho...@python.org
2009/2/19 <rush...@gmail.com>:

> Thank you Tim...
>
> It is not a search for perfect language. It is a search for a capable
> language to modern worlds' needs.

That would be just about any of the ones you mentioned, then. Unless
you mean the needs of a specific project, in which case the
suitability will depend on the project.

--
Tim Rowe

sturlamolden

Feb 19, 2009, 11:55:00 AM2/19/09
to
On Feb 19, 4:39 pm, rushen...@gmail.com wrote:

> I really wanted to learn python, but as i said i don't want to make a
> dead investment. I hope someone can fix these design errors and maybe
> can write an interpreter in python :)

Java and Python have different strengths and weaknesses. There is no
such thing as "the perfect language". It all depends on what you want
to do.

Just be aware that scientists and engineers who need parallel
computers do not use Java or C#. A combination of a scripting language
with C or Fortran seems to be preferred. Popular scripting languages
for numerical computing include Python, R, IDL, Perl, and Matlab.

You will find that in the Java community, threads are generally used
for tasks other than parallel computing, mainly asynchronous I/O.
Java does not have a GIL, nor do Jython or Microsoft's IronPython.
But if you use threads for I/O, the GIL does not matter. Having more
than one CPU does not make your hard disk or network connection any
faster. The GIL does not matter until crunching numbers on the CPU
becomes the bottleneck. And when you finally get there, perhaps it is
time to look into some C programming? Those who complain about
CPython's GIL (or the GIL of Perl/PHP/Ruby for that matter) seem to be
developers with little or no experience with parallel computers.
Yes, the GIL prevents Python threads from being used in a certain way.
But do you really need to use threads like that? Or do you just think
you do?

S.M.

Richard Brodie

Feb 19, 2009, 12:07:33 PM2/19/09
to

"sturlamolden" <sturla...@yahoo.no> wrote in message
news:d544d846-15ac-446e...@m40g2000yqh.googlegroups.com...

> The GIL does not matter before crunching numbers on the CPU
> becomes the bottleneck. And when you finally get there, perhaps it is
> time to look into some C programming?

Or numpy on a 512 core GPGPU processor, because using the CPU
for crunching numbers is just *so* dated. ;)


Paul Rubin

Feb 19, 2009, 3:18:04 PM2/19/09
to
sturlamolden <sturla...@yahoo.no> writes:
> Yes, the GIL prevents Python threads from being used in a certain way.
> But do you really need to use threads like that? Or do you just think
> you do?

How old is your computer, why did you buy it, and is it the first one
you ever owned?

For most of us, I suspect, it is not our first one, and we bought it
to get a processing speedup relative to the previous one. If such
speedups were useless or unimportant, we would not have blown our hard
earned cash replacing perfectly good older hardware, so we have to
accept the concept that speed matters and ignore those platitudes that
say otherwise.

It used to be that new computers were faster than the old ones because
they ran at higher clock rates. That was great, no software changes
at all were required to benefit from the higher speed. Now, they get
the additional speed by having more cores. That's better than nothing
but making use of it requires fixing the GIL.

Steve Holden

Feb 19, 2009, 3:33:59 PM2/19/09
to rush...@gmail.com, pytho...@python.org

By the way, since you have chosen Java you might be interested to know
that the JPython implementation (also open source) generates JVM
bytecode, and allows you to freely mix Java and Python classes.

There is no Global Interpreter Lock in JPython ...

Grant Edwards

Feb 19, 2009, 3:44:00 PM2/19/09
to
On 2009-02-19, Steve Holden <st...@holdenweb.com> wrote:

> By the way, since you have chosen Java you might be interested
> to know that the JPython implementation (also open source)
> generates JVM bytecode, and allows you to freely mix Java and
> Python classes.
>
> There is no Global Interpreter Lock in JPython ...

I think somebody should propose adding one.

It would be a nice change of pace from the standard
never-ending thread(s) on the GIL.

--
Grant Edwards    grante at visi.com    Yow! I'm into SOFTWARE!

Falcolas

Feb 19, 2009, 3:48:37 PM2/19/09
to
On Feb 19, 1:18 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
> ...  If such

> speedups were useless or unimportant, we would not have blown our hard
> earned cash replacing perfectly good older hardware, so we have to
> accept the concept that speed matters and ignore those platitudes that
> say otherwise.

That's fair, but by using a high level language in the first place,
you've already made the conscious decision to sacrifice speed for ease
of programming. Otherwise, you would probably be programming in C.

The question really is "Is it fast enough", and the answer usually is
"Yes". And when the answer is "No", there are many things which can be
done before the need to multi-thread the whole script comes about.

It's a proposition that used to bother me, until I did some actual
programming of real world problems in Python. I've yet to really find
a case where the application was slow enough to justify the cost of
using multiple Python processes.

~G

Paul Rubin

Feb 19, 2009, 5:11:31 PM2/19/09
to
Falcolas <garr...@gmail.com> writes:
> That's fair, but by using a high level language in the first place,
> you've already made the conscious decision to sacrifice speed for ease
> of programming. Otherwise, you would probably be programming in C.

That Python is so much slower than C is yet another area where Python
can use improvement.

> It's a proposition that used to bother me, until I did some actual
> programming of real world problems in Python. I've yet to really find
> a case where the application was slow enough to justify the cost of
> using multiple Python processes.

Right, that's basically the issue here: the cost of using multiple
Python processes is unnecessarily high. If that cost were lower then
we could more easily use multiple cores to make our apps faster.

rush...@gmail.com

Feb 19, 2009, 5:18:34 PM2/19/09
to
Hi again

I really want to make clear that I am not in search of a perfect
language. Python is a great language for programming productivity, but
there are some real-world facts. Some people want a language that
provides great flexibility: a language can provide both threads and
processes and let the programmer choose. I really believe that the GIL
is a design error.

Thanks.

Rushen

Tim Wintle

Feb 19, 2009, 5:35:19 PM2/19/09
to Falcolas, pytho...@python.org
On Thu, 2009-02-19 at 12:48 -0800, Falcolas wrote:
> That's fair, but by using a high level language in the first place,
> you've already made the conscious decision to sacrifice speed for ease
> of programming. Otherwise, you would probably be programming in C.
My parents would have gone mad at me for saying that when I was young -
C is just about the highest-level language they ever used - Assembly/hex
all the way!

So if you really want speed then why don't you write your code in
assembly? That's the only "perfect language" - it's capable of doing
everything in the most efficient way possible on your machine.

Of course that's a hassle, so I guess you may as well use C, since
that's almost certainly only linearly worse than using assembly, and it
takes far less time to use.

Oh, but then you may as well use python, since (again) that's probably
only linearly worse than C, and well-programmed C at that - I certainly
wouldn't end up with some of the optimisations that have gone into the
python interpreter!

That's basically what my mind goes through whenever I choose a language
to use for a task - and why I almost always end up with Python.

> It's a proposition that used to bother me, until I did some actual
> programming of real world problems in Python. I've yet to really find
> a case where the application was slow enough to justify the cost of
> using multiple Python processes.

I deal with those situations a fair bit - but the solutions are normally
easy - if it's limited by waiting for IO then I use threads, if it's
limited by waiting for CPU time then I use multiple processes, or share
the job over another application (such as MySQL), or run a task over a
cluster.

If you have a task where the linear optimisation offered by multiple
cores is really important then you can either:
* Run it over multiple processes, or multiple machines in Python
or
* spend a year writing it in C or assembly, by which time you can buy a
new machine that will run it fine in Python.


Yes, we're coming to a point where we're going to have tens of cores in
a chip, but by that time someone far cleverer than me (possibly someone
who's on this list) will have solved that problem. The top experts in
many fields use Python, and if they weren't able to make use of multiple
core chips, then there wouldn't be any demand for them.

Tim Wintle


>
> ~G

Tim Rowe

Feb 19, 2009, 5:35:38 PM2/19/09
to pytho...@python.org
2009/2/19 Paul Rubin <http>:

> That Python is so much slower than C is yet another area where Python
> can use improvement.

No, because we don't use Python where C would be more appropriate.
Sure nobody would complain if Python were faster, but it's not for
speed that we choose Python. Not speed of /execution/ that is.
Different languages have different trade-offs. Python's trade-offs
suit us. If they don't suit you, use a language with trade-offs that
do.


--
Tim Rowe

Tim Rowe

Feb 19, 2009, 5:37:02 PM2/19/09
to pytho...@python.org
2009/2/19 <rush...@gmail.com>:

It's only an error if it gets in the way. It's the experience of a lot
of programmers that it doesn't, so it's not an error.


--
Tim Rowe

Steve Holden

Feb 19, 2009, 5:53:40 PM2/19/09
to pytho...@python.org
Tim Rowe wrote:
> 2009/2/19 <rush...@gmail.com>:
> It's only an error if it gets in the way. It's the experience of a lot
> of programmers that it doesn't, so it's not an error.
>
And it's not a feature of the language, rather of one or two
implementations. Neither JPython nor IronPython uses a GIL, to the best
of my knowledge, so you are still quite at liberty to use them.

Tim Wintle

Feb 19, 2009, 6:16:32 PM2/19/09
to pytho...@python.org
On Thu, 2009-02-19 at 12:18 -0800, Paul Rubin wrote:
> If such
> speedups were useless or unimportant, we would not have blown our hard
> earned cash replacing perfectly good older hardware, so we have to
> accept the concept that speed matters and ignore those platitudes that
> say otherwise.

Kind of agree (although I use a netbook at lot at the moment, and I
don't use that because of speed-ups!)

> It used to be that new computers were faster than the old ones because
> they ran at higher clock rates. That was great, no software changes
> at all were required to benefit from the higher speed. Now, they get
> the additional speed by having more cores. That's better than nothing
> but making use of it requires fixing the GIL.

My opinion on this (when talking about personal computers rather than
servers) is that:

(1)
Computers *appear* faster now because they have more cores - you can
have one doing the fancy GUI effects of Compiz etc. in the background,
while the other core actually does the work.

(2)
Actual speedups aren't that related to either clock speed or cores at
the moment, they're related to the switch to 64-bit processors, the
massive increases in RAM and the increase in system bus speeds and other
IO (and of course graphics cards). I suspect that the next common
increase will be solid state hard disks.

e.g. I used to expect my computer to be paging all the time, although
I'd try to reduce it - these days I'm really upset when I see I've had
to page *anything* to disk!

Another massive increase (which I am willing to pay more for with the
work I do) is the processor cache - at first it was amazing when we got
significant level2 cache advertised on pc equipment, now I can fit
massive amounts of code into my 4mb level-2 cache *on my laptop*! That's
a huge impact for numerical work.

(3)
Multiple cores scale processing power at best linearly with the number
of cores (since you're normally going to be IO bound at some point).
Perhaps the GIL will be relaxed a bit, but it's not going to have a
massive effect.

Tim Wintle


Mensanator

unread,
Feb 19, 2009, 6:19:51 PM2/19/09
to
On Feb 19, 2:18 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:

When I run a Python program, the Windows task manager shows both
cores running (usually a 60/40 split) for an overall 50% usage.

What am I actually seeing? If Python only uses one of the cores,
why do both light up? Is everything much more complicated (due to
OS scheduling, etc.) than the simple explanations of GIL?

Paul Rubin

unread,
Feb 19, 2009, 6:20:51 PM2/19/09
to
Tim Rowe <dig...@gmail.com> writes:
> > That Python is so much slower than C is yet another area where Python
> > can use improvement.
>
> No, because we don't use Python where C would be more appropriate.

C is basically never appropriate. C should be outlawed by Congress
with the ban enforced by roving pie-throwing squads <wink>. If a
Python program is too slow, making Python faster is far preferable to
switching to C.

> Sure nobody would complain if Python were faster, but it's not for
> speed that we choose Python. Not speed of /execution/ that is.

I would say, slow execution is a drawback that we put up with in order
to gain benefits of Python programming that are mostly unrelated to
the causes of the slowness. The slowness itself can be addressed by
technical means, such as native-code compilation and eliminating the
GIL. I believe (for example) that the PyPy project is doing both of
these.

> Different languages have different trade-offs. Python's trade-offs
> suit us. If they don't suit you, use a language with trade-offs that
> do.

I don't think Python's slowness is inherent in the language. It's
mostly a shortcoming of the implementation that should be fixed like
any other such shortcoming.

Falcolas

unread,
Feb 19, 2009, 6:23:40 PM2/19/09
to
On Feb 19, 3:11 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:

> Falcolas <garri...@gmail.com> writes:
> > It's a proposition that used to bother me, until I did some actual
> > programming of real world problems in Python. I've yet to really find
> > a case where the application was slow enough to justify the cost of
> > using multiple Python processes.
>
> Right, that's basically the issue here: the cost of using multiple
> Python processes is unnecessarily high.  If that cost were lower then
> we could more easily use multiple cores to make our apps faster.

I was actually referring to the time cost of researching or developing
a parallel execution algorithm which would be suitable for multiple
processes.

The system overhead of using the Python multiprocess module is fairly
negligible for the systems I work on.

> Different languages have different trade-offs. Python's trade-offs
> suit us. If they don't suit you, use a language with trade-offs that
> do.

+1

~G

Paul Rubin

unread,
Feb 19, 2009, 6:29:40 PM2/19/09
to
Tim Rowe <dig...@gmail.com> writes:
> > I really believe that GIL is a design error.
> It's only an error if it gets in the way. It's the experience of a lot
> of programmers that it doesn't, so it's not an error.

It does get in the way of quite a few of us, but I wouldn't exactly
call it an error. It was a sensible decision at the time it was made.
Changing technology and changing requirements have turned it into a
problem since then.

A sensible decision later becoming a problem is a fairly normal
occurrence, not just in software but in just about every area of human
endeavor. The sensible response to the changed landscape is to
recognize the problem and figure out what to do about it. Denying the
problem's existence is not sensible.

Steve Holden

unread,
Feb 19, 2009, 6:34:28 PM2/19/09
to pytho...@python.org
Tim Wintle wrote:
> On Thu, 2009-02-19 at 12:18 -0800, Paul Rubin wrote:
>> If such
>> speedups were useless or unimportant, we would not have blown our hard
>> earned cash replacing perfectly good older hardware, so we have to
>> accept the concept that speed matters and ignore those platitudes that
>> say otherwise.
>
> Kind of agree (although I use a netbook at lot at the moment, and I
> don't use that because of speed-ups!)
[...]

> (3)
> Multiple cores scale processing power linearly at best with the number
> of cores (since you're normally going to be IO based at some point).
> Perhaps the GIL will be relaxed a bit, but it's not going to have a
> massive effect.
>
There are significant classes of problems which *are* compute bound, and
as computers are applied more and more to planning and design problems
it seems likely that kind of application will become more significant.

In twenty years time our laptops will probably be continually optimizing
aspects of our lives using advanced linear algebra algorithms over
matrices with hundreds or thousands of millions of elements. I wouldn't
like Python to be excluded from solving such problems, and others we
currently fail to foresee.

Though my own interest does tend to lie in the areas where interaction
of various kinds dominates the need for large chunks of processing
power, I can't ignore the obvious. Many compute-bound problems do exist,
and they are important.

Christian Heimes

unread,
Feb 19, 2009, 6:38:21 PM2/19/09
to pytho...@python.org
Mensanator wrote:
> When I run I Python program, the Windows task manager shows both
> cores running (usually a 60/40 split) for an overall 50% usage.
>
> What am I actually seeing? If Python only uses one of the cores,
> why do both light up? Is everything much more complicated (due to
> OS scheduling, etc.) than the simple explanations of GIL?

A Python *program* can utilize more than one core; it's just that Python
*code* can't run on multiple cores in parallel. Every time a C function
calls code that doesn't use the Python API it can choose to release the
GIL. That way a Python program can wrap a library and use as many cores
as possible.
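
If I remember correctly, zlib is one such wrapper: zlib.compress()
drops the GIL while its C code runs, so a rough sketch like the one
below should keep two cores busy (the 10 MB payload and the loop
count are arbitrary):

import threading
import time
import zlib

data = 'x' * (10 * 1024 * 1024)        # 10 MB of easily compressible junk

def work():
    for _ in range(5):
        zlib.compress(data)            # C code; should run with the GIL released

start = time.time()
threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print 'two compressing threads took', time.time() - start, 'seconds'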

Christian

Steve Holden

unread,
Feb 19, 2009, 6:45:30 PM2/19/09
to pytho...@python.org
Paul Rubin wrote:
> Tim Rowe <dig...@gmail.com> writes:
>>> That Python is so much slower than C is yet another area where Python
>>> can use improvement.
>> No, because we don't use Python where C would be more appropriate.
>
> C is basically never appropriate. C should be outlawed by Congress
> with the ban enforced by roving pie-throwing squads <wink>. If a
> Python program is too slow, making Python faster is far preferable to
> switching to C.
>
Unfortunately it's also far more difficult. I speak with the experience
of the "Need for Speed" sprint behind me: the accumulated brainpower at
that event should have been able to pick all the low-hanging fruit, and
yet the resultant net speedup was worthwhile, but definitely not immense.

>> Sure nobody would complain if Python were faster, but it's not for
>> speed that we choose Python. Not speed of /execution/ that is.
>
> I would say, slow execution is a drawback that we put up with in order
> to gain benefits of Python programming that are mostly unrelated to
> the causes of the slowness. The slowness itself can be addressed by
> technical means, such as native-code compilation and eliminating the
> GIL. I believe (for example) that the PyPy project is doing both of
> these.
>

And IronPython and JPython already have. (How many times do I have to
say this before somebody with access to decent multi-processor hardware
runs some actual benchmarks? Where's snakebite.org when you need it? ;-)

>> Different languages have different trade-offs. Python's trade-offs
>> suit us. If they don't suit you, use a language with trade-offs that
>> do.
>
> I don't think Python's slowness is inherent in the language. It's
> mostly a shortcoming of the implementation that should be fixed like
> any other such shortcoming.

Reasonable, and true. Some people talk about the GIL as though it were
something other than an implementation artifact.

What Guido doesn't seem to have accepted yet is that slowing [C]Python
down by 50% on a single-processor CPU will actually be a worthwhile
tradeoff in ten years time, when nothing will have less than eight cores
and the big boys will be running at 64 kilo-cores.

MRAB

unread,
Feb 19, 2009, 6:52:12 PM2/19/09
to pytho...@python.org
Tim Wintle wrote:
[snip]

> Yes, we're coming to a point where we're going to have tens of cores in
> a chip, but by that time someone far cleverer than me (possibly someone
> who's on this list) will have solved that problem. The top experts in
> many fields use Python, and if they weren't able to make use of multiple
> core chips, then there wouldn't be any demand for them.
>
Perhaps we should ask Guido who it is; after all, he's the one with the
time machine! :-)

Terry Reedy

unread,
Feb 19, 2009, 6:58:57 PM2/19/09
to pytho...@python.org
Paul Rubin wrote:

> I would say, slow execution is a drawback that we put up with in order
> to gain benefits of Python programming that are mostly unrelated to
> the causes of the slowness. The slowness itself can be addressed by
> technical means, such as native-code compilation and eliminating the
> GIL.

Given that the GIL remains to make Python run faster in the usual (up to
now, at least) case of 1 processor, that seems a strange statement.

Steve Holden

unread,
Feb 19, 2009, 7:13:57 PM2/19/09
to pytho...@python.org
Terry Reedy wrote:

> Paul Rubin wrote:
>
>> I would say, slow execution is a drawback that we put up with in order
>> to gain benefits of Python programming that are mostly unrelated to
>> the causes of the slowness. The slowness itself can be addressed by
>> technical means, such as native-code compilation and eliminating the
>> GIL.
>
> Given that the GIL remains to make Python run faster in the usual (up to
> now, at least) case of 1 processor, that seems a strange statement.
>
From a historical perspective, that's going to seem like a very
parochial PoV in (say) twenty years.

Paul Rubin

unread,
Feb 19, 2009, 7:16:24 PM2/19/09
to
Terry Reedy <tjr...@udel.edu> writes:
> > The slowness itself can be addressed by technical means, such as
> > native-code compilation and eliminating the GIL.
>
> Given that the GIL remains to make Python run faster in the usual (up
> to now, at least) case of 1 processor, that seems a strange statement.

We've had this discussion before. The 1-processor slowdown that you
refer to comes from replacing the GIL with the blunt instrument of a
lock around each reference count operation. That has the advantage of
not breaking CPython in a million places, but has the drawback of
taking a big performance hit. The long term fix is to replace
reference counts with a tracing GC. That is apparently not feasible
in the framework of CPython and the many extension modules that have
been written for it, so it would have to be accompanied by an
implementation switch (e.g. PyPy).

Steve Holden has mentioned Jython and Ironpython a few times in this
thread. Those are reasonable proofs of the concept of a GIL-less
Python, but for various reasons (spelled J-V-M and W-i-n-d-o-w-s) are
not all that suitable for many current Python users.

Steve Holden

unread,
Feb 19, 2009, 7:15:22 PM2/19/09
to pytho...@python.org
Paul Rubin wrote:
> Tim Rowe <dig...@gmail.com> writes:
>>> I really believe that GIL is a design error.
>> It's only an error if it gets in the way. It's the experience of a lot
>> of programmers that it doesn't, so it's not an error.
>
> [...] Denying the

> problem's existence is not sensible.

And if wishes were horses then beggars would ride :-)

Steve Holden

unread,
Feb 19, 2009, 7:17:12 PM2/19/09
to pytho...@python.org

But he clearly hasn't been using it lately.

<heresy>Perhaps it's time Python stopped being a dictatorship?</heresy>

regards
Steve

Steve Holden

unread,
Feb 19, 2009, 8:18:32 PM2/19/09
to pytho...@python.org
Mensanator wrote:
> On Feb 19, 2:18 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
[...]

> When I run I Python program, the Windows task manager shows both
> cores running (usually a 60/40 split) for an overall 50% usage.
>
> What am I actually seeing? If Python only uses one of the cores,
> why do both light up? Is everything much more complicated (due to
> OS scheduling, etc.) than the simple explanations of GIL?

You are seeing your Python program running on one core, and the usual
Windows crap keeping the other one busy.

Mensanator

unread,
Feb 19, 2009, 8:57:52 PM2/19/09
to
On Feb 19, 7:18 pm, Steve Holden <st...@holdenweb.com> wrote:
> Mensanator wrote:
> > On Feb 19, 2:18 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
> [...]
> > When I run I Python program, the Windows task manager shows both
> > cores running (usually a 60/40 split) for an overall 50% usage.
>
> > What am I actually seeing? If Python only uses one of the cores,
> > why do both light up? Is everything much more complicated (due to
> > OS scheduling, etc.) than the simple explanations of GIL?
>
> You are seeing your Python program running on one core, and the usual
> Windows crap keeping the other one busy.

I thought of that, but the usual Windows crap accounts for only a
couple percent prior to the Python program running. Christian Heimes'
answer sounds more realistic.

But what do I know?

Christian Heimes

unread,
Feb 19, 2009, 9:19:15 PM2/19/09
to pytho...@python.org
Mensanator wrote:
> I thought of that, but the usual Windows crap accounts for only a
> couple percent prior to the Python program running. Christian Heimes
> answer sounds more realistic.
>
> But what do I know?

Be happy that your program makes use of both cores? :]

You can restrict your program from using more than one core by setting
the CPU affinity. On Windows the pywin32 package wraps all necessary calls:


import win32process
mask = 1  # bitmask of allowed cores; 1 = first core only
phandle = win32process.GetCurrentProcess()
win32process.SetProcessAffinityMask(phandle, mask)


Last week I wrote a wrapper for the Linux syscalls
sched_setaffinity and friends. I may be able and allowed to release the
stuff in a few weeks.

Christian

Steve Holden

unread,
Feb 19, 2009, 9:36:07 PM2/19/09
to pytho...@python.org
Mensanator wrote:
> On Feb 19, 7:18 pm, Steve Holden <st...@holdenweb.com> wrote:
>> Mensanator wrote:
>>> On Feb 19, 2:18 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
>> [...]
>>> When I run I Python program, the Windows task manager shows both
>>> cores running (usually a 60/40 split) for an overall 50% usage.
>>> What am I actually seeing? If Python only uses one of the cores,
>>> why do both light up? Is everything much more complicated (due to
>>> OS scheduling, etc.) than the simple explanations of GIL?
>> You are seeing your Python program running on one core, and the usual
>> Windows crap keeping the other one busy.
>
> I thought of that, but the usual Windows crap accounts for only a
> couple percent prior to the Python program running. Christian Heimes
> answer sounds more realistic.
>
> But what do I know?
>
At least as much as I do, probably.

Tim Roberts

unread,
Feb 19, 2009, 10:35:36 PM2/19/09
to
Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
>
>C is basically never appropriate. C should be outlawed by Congress
>with the ban enforced by roving pie-throwing squads <wink>.

One of my favorite quotes:

The last good thing written in C was Schubert's Ninth Symphony.
--
Tim Roberts, ti...@probo.com
Providenza & Boekelheide, Inc.

Steven D'Aprano

unread,
Feb 20, 2009, 1:57:07 AM2/20/09
to
Steve Holden wrote:

>> It's only an error if it gets in the way. It's the experience of a lot
>> of programmers that it doesn't, so it's not an error.
>>
> And it's not a feature of the language, rather of one or two
> implementations. Neither JPython not IronPython use a GIL to the best of
> my knowledge, so you are still quite at liberty to use them.

I found this interesting benchmark on the relative speeds of CPython 2.3,
IronPython 0.1 and Jython 2.1. It's from six years ago, so not exactly
reporting on the state of the art, but it suggests to me that IronPython is
faster at the fundamentals but much slower at some things.

http://www.python.org/~jeremy/weblog/031209a.html


--
Steven

Steven D'Aprano

unread,
Feb 20, 2009, 2:11:26 AM2/20/09
to
Paul Rubin wrote:

> How old is your computer, why did you buy it, and is it the first one
> you ever owned?
>
> For most of us, I suspect, it is not our first one, and we bought it
> to get a processing speedup relative to the previous one.

My computer is about eight months old, and I bought it because the previous
one died.


> If such
> speedups were useless or unimportant, we would not have blown our hard
> earned cash replacing perfectly good older hardware,

Oh the assumptions in that statement...

"Blowing hard-earned cash" assumes that people buy computers only when they
need to. That's certainly not true -- there's a lot of irrational purchases
involved. I have a friend who has just recently spent $2000 on storage so
he can store more games and videos, which he cheerfully admits he'll never
play or watch. He describes it as his "dragon's hoard": it is for knowing
it's there, not for using.

Often hardware is upgraded because it's broken, or because you can't get
parts, or because the software you need will only run on newer machines. I
say *newer* rather than *faster*, because speed is only sometimes a factor
in why software won't run on old machines. My old Mac running a 68030 in
1990 ran Microsoft Word perfectly fast enough for even advanced word
processing needs, and nearly twenty years later, there's nothing I need
from a word processor that I couldn't do in 1990.


> so we have to
> accept the concept that speed matters and ignore those platitudes that
> say otherwise.

The *perception* that speed matters, matters. The reality is that the
majority of computing tasks outside of certain specialist niches are I/O
bound, not CPU. Office server software is rarely CPU bound, and when it is,
in my experience there's one rogue process using all the CPU: the software
is broken, and a faster CPU would just let it be broken at a faster speed.

Gamers need better graphics cards and more memory, not faster CPUs. Internet
surfers need faster ethernet, more bandwidth and more memory, not faster
CPUs. Graphics designers need bigger hard drives and more memory, not
faster CPUs. (Hmm. There seems to be a pattern there...)

Of course, there are a few niches that do require faster CPUs: video
editing, some (but by no means all) Photoshop filters, number crunching,
etc. But even for them, you can often get more bang-for-your-buck
performance increase by adding more memory.

Speaking for myself, I'd happily take a 20% slower CPU for more reliable,
faster DVD/CD burning. What do I care if it takes my computer 120ms to open
a window instead of 100ms, but I care a lot if it takes me 10 minutes to
make a coaster instead of 7 minutes to make a good disc.

--
Steven

Steven D'Aprano

unread,
Feb 20, 2009, 2:19:14 AM2/20/09
to
Steve Holden wrote:

> What Guido doesn't seem to have accepted yet is that slowing [C]Python
> down by 50% on a single-processor CPU will actually be a worthwhile
> tradeoff in ten years time, when nothing will have less than eight cores
> and the big boys will be running at 64 kilo-cores.

Ten years?

There's no guarantee that Python will still even be around in ten years. It
probably will be, I see no reason why it won't, but who knows? Maybe we'll
have mandatory software warranties tailored to suit the Microsofts and
Apples, and Guido and the PSF will be forced to abandon the language.

I think a design mistake would be to hamstring Python now for a hypothetical
benefit in a decade. But, well, in five years time, or three? Don't know.


--
Steven

Hendrik van Rooyen

unread,
Feb 20, 2009, 3:04:39 AM2/20/09
to pytho...@python.org
"Steve Holden" <ste...@..web.com> wrote:

> <heresy>Perhaps it's time Python stopped being a dictatorship?</heresy>

This will need a wholesale switch to the worship of Freya - It is rumoured
that She is capable of herding cats.

- Hendrik

rush...@gmail.com

unread,
Feb 20, 2009, 4:25:43 AM2/20/09
to
On 20 February, 01:20, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:

> I would say, slow execution is a drawback that we put up with in order
> to gain benefits of Python programming that are mostly unrelated to
> the causes of the slowness.  The slowness itself can be addressed by
> technical means, such as native-code compilation and eliminating the
> GIL.  I believe (for example) that the PyPy project is doing both of
> these.

Do you believe that there is an effort to remove the GIL in PyPy?
As far as I know there is no intent to remove the GIL in PyPy; the GIL
will probably still be used there. There is a mistake in your reply or mine.

Thank you
Rushen

sturlamolden

unread,
Feb 20, 2009, 5:36:14 AM2/20/09
to
On Feb 20, 12:19 am, Mensanator <mensana...@aol.com> wrote:

> What am I actually seeing? If Python only uses one of the cores,
> why do both light up?

Because of OS scheduling. You have more than one process running. The
Python process does not stay on one core. Try to put CPython into a
tight loop ("while 1: pass"). You will see ~50% use of both cores. If
you had 4 cores, you would see ~25% use.


> Is everything much more complicated (due to
> OS scheduling, etc.) than the simple explanations of GIL?

No. Your Python code cannot use more than one core simultaneously.
It's just that scheduling happens so fast and so often that you don't
notice it.
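
A quick sketch to convince yourself (the loop count is arbitrary):
the threaded version below takes about as long as the sequential one,
even though the task manager shows both cores lit up.

import threading
import time

def spin(n):
    # pure-Python loop: it holds the GIL the whole time
    while n:
        n -= 1

N = 10 ** 7

start = time.time()
spin(N)
spin(N)
print 'sequential: ', time.time() - start

start = time.time()
a = threading.Thread(target=spin, args=(N,))
b = threading.Thread(target=spin, args=(N,))
a.start()
b.start()
a.join()
b.join()
print 'two threads:', time.time() - start   # roughly the same, or worse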

Mensanator

unread,
Feb 20, 2009, 9:47:12 AM2/20/09
to
On Feb 20, 4:36 am, sturlamolden <sturlamol...@yahoo.no> wrote:
> On Feb 20, 12:19 am, Mensanator <mensana...@aol.com> wrote:
>
> > What am I actually seeing? If Python only uses one of the cores,
> > why do both light up?
>
> Because of OS scheduling. You have more than one process running. The
> Python process does not stay on one core. Try to put CPython into a
> tight loop ("while 1: pass"). You will see ~50% use of both cores. If
> you had 4 cores, you would see ~25% use.

Saw that once when I had access to a four core machine.

>
> > Is everything much more complicated (due to
> > OS scheduling, etc.) than the simple explanations of GIL?
>
> No.

Don't you mean "yes"?

> Your Python code cannot use more than one core simultaneously.
> It's just that scheduling happens so fast and so often that you don't
> notice it.

Or that the Task Manager can't track the switches fast
enough to show the interleaving, giving the illusion that
both cores are operating simultaneously.

rush...@gmail.com

unread,
Feb 20, 2009, 3:35:58 PM2/20/09
to
I want to correct my last post, where I said that there is no intent
to remove the GIL from Python. There actually is such an intent, though
it reads more like a wish for a wizard :).
On the PyPy blog there is an explanation about the GIL and PyPy:
"Note that multithreading in PyPy is based on a global interpreter
lock, as in CPython. I imagine that we will get rid of the global
interpreter lock at some point in the future -- I can certainly see
how this might be done in PyPy, unlike in CPython -- but it will be a
lot of work nevertheless. Given our current priorities, it will
probably not occur soon unless someone steps in."
So nothing new about the GIL in CPython, or even in PyPy.

Thank you...
Rushen


Joshua Judson Rosen

unread,
Feb 21, 2009, 3:13:26 PM2/21/09
to
Paul Rubin <http://phr...@NOSPAM.invalid> writes:
>
> Right, that's basically the issue here: the cost of using multiple
> Python processes is unnecessarily high. If that cost were lower then
> we could more easily use multiple cores to make our apps faster.

What cost is that? At least on unix systems, fork() tends to have
*trivial* overhead in terms of both time and space, because the
processes use lazy copy-on-write memory underneath, so the real costs
of resource-consumption for spawning a new process vs. spawning a new
thread should be comparable.
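
A crude way to see this on a unix box, as a sketch (the payload size
and the fork count are arbitrary):

import os
import time

# unused on purpose: it just sits in the parent's address space
payload = ['x' * 100] * 100000

start = time.time()
for _ in range(100):
    pid = os.fork()              # copy-on-write: nothing is actually copied yet
    if pid == 0:
        os._exit(0)              # the child does no work and exits at once
    os.waitpid(pid, 0)
print '100 forks took', time.time() - start, 'seconds'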

Are you referring to overhead associated with inter-process
communication? If so, what overhead is that?

--
Don't be afraid to ask (Lf.((Lx.xx) (Lr.f(rr)))).

Paul Rubin

unread,
Feb 21, 2009, 5:02:27 PM2/21/09
to
Joshua Judson Rosen <roz...@geekspace.com> writes:
> > Right, that's basically the issue here: the cost of using multiple
> > Python processes is unnecessarily high.
> What cost is that?

The cost of messing with the multiprocessing module instead of having
threads work properly, and the overhead of serializing Python data
structures to send to another process by IPC, instead of using the
same object in two threads. Also, one way I often use threading is by
sending function objects to another thread through a Queue, so the
other thread can evaluate the function. I don't think multiprocessing
gives a way to serialize functions, though maybe something like it
can be done at higher nuisance using classes.
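
Here's the thread version of the pattern I mean, as a minimal sketch
(say_hello just stands in for real work):

import threading
import Queue

def worker(jobs):
    while True:
        func = jobs.get()
        if func is None:     # sentinel: time to stop
            break
        func()               # evaluate whatever callable was sent over

def say_hello():
    print 'hello from the worker thread'

jobs = Queue.Queue()
t = threading.Thread(target=worker, args=(jobs,))
t.start()

jobs.put(say_hello)              # any callable works with threads...
jobs.put(lambda: say_hello())
jobs.put(None)
t.join()
# ...whereas a multiprocessing.Queue would have to pickle them, which
# works for plain module-level functions but not for lambdas or closures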

Hendrik van Rooyen

unread,
Feb 22, 2009, 3:09:39 AM2/22/09
to pytho...@python.org
"Paul Rubin" <http://phr...@NOSPAM.invalid> wrote:

> The cost of messing with the multiprocessing module instead of having
> threads work properly, and the overhead of serializing Python data
> structures to send to another process by IPC, instead of using the
> same object in two threads. Also, one way I often use threading is by
> sending function objects to another thread through a Queue, so the
> other thread can evaluate the function. I don't think multiprocessing
> gives a way to serialize functions, though maybe something like it
> can be done at higher nuisance using classes.

There are also Pyro and xmlrpc and shm - all of them apparently more
hassle than threads, and all of them better able to exploit parallelism.

That said, this has made me think the following:

<conjecture>
It is an error to pass anything but plain data between processes,
as anything else does not scale easily.

Passing plain data between processes means either serialising
the data and using channels such as pipes or sockets, or
passing a pointer to a shared memory block through a similar
channel.

Following this leads to a clean design, while attempting to pass
higher order stuff quickly leads to convoluted code when
you try to make things operate in parallel.
</conjecture>

The above can be crudely summed up as:

You can share and pass data, but you should not pass
or share code between processes.

Now the above is congruent with my own experience,
but I wonder if it is generally applicable.
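
In code, the conjecture comes down to something like the sketch below,
where only tuples of plain data ever cross the process boundary (the
names are made up):

from multiprocessing import Process, Queue

def worker(inbox, outbox):
    for name, value in iter(inbox.get, None):   # plain tuples in...
        outbox.put((name, value * 2))           # ...plain tuples out

if __name__ == '__main__':
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put(('job-1', 21))
    inbox.put(None)            # sentinel: no more work
    print outbox.get()         # ('job-1', 42)
    p.join()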

- Hendrik


Aahz

unread,
Mar 5, 2009, 8:33:44 PM3/5/09
to
In article <mailman.99.12348638...@python.org>,
Hendrik van Rooyen <ma...@microcorp.co.za> wrote:
>"Ben Finney" <bignose+h...@benfinney.id.au> wrote:
>>aa...@pythoncraft.com (Aahz) writes:
>>> In article <mailman.52.12347978...@python.org>,
>>> Hendrik van Rooyen <ma...@microcorp.co.za> wrote:
>>>>
>>>>Occam was the language that should have won the marketing prize,
>>>>but didn't.
>>>
>>> It wasn't simple enough.
>>
>>*bdom-tsssh* <URL:http://en.wikipedia.org/wiki/Occam's_razor>
>
>If Aahz was trolling, then he got me. I know about William of Occam,
>after whom the language was named, and his razor, but did not make the
>association, and answered seriously.

Not trolling, but making a joke. Not always easy to tell the
difference, of course.
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/

"All problems in computer science can be solved by another level of
indirection." --Butler Lampson

Hendrik van Rooyen

unread,
Mar 6, 2009, 2:03:01 AM3/6/09
to pytho...@python.org
"Aahz" <aa...@pythoncraft.com >wrote:

> In article <mailman.99.12348638...@python.org>,
> Hendrik van Rooyen <ma...@microcorp.co.za> wrote:

> >If Aahz was trolling, then he got me. I know about William of Occam,
> >after whom the language was named, and his razor, but did not make the
> >association, and answered seriously.
>
> Not trolling, but making a joke. Not always easy to tell the
> difference, of course.

*grin* and at the time, I did not get the point - I owe you a beer for
being obtuse.

- Hendrik
