
using python interpreters per thread in C++ program


grbgooglefan

Sep 7, 2009, 1:17:57 AM
Hi

I have a multi-threaded C++ program in which I want to embed a Python
interpreter for each thread. I am using Python 2.6.2 on Linux for this.

When I tried to embed a Python interpreter per thread, I got a crash
when the threads called Python's C APIs.

Is it not possible to use Python interpreters that are private to each
thread?

What is the best way to embed Python in a multi-threaded C++ application?

Please guide.

sturlamolden

Sep 7, 2009, 1:42:27 AM
On 7 Sep, 07:17, grbgooglefan <ganeshbo...@gmail.com> wrote:


> What is best way to embed python in multi-threaded C++ application?

Did you remember to acquire the GIL? The GIL is global to the process
(hence the name).

void foobar(void)
{
    PyGILState_STATE state = PyGILState_Ensure();

    /* Safe to use the Python C API here */

    PyGILState_Release(state);
}


S.M.

> Please guide.

sturlamolden

Sep 7, 2009, 1:55:03 AM
On 7 Sep, 07:17, grbgooglefan <ganeshbo...@gmail.com> wrote:

> Can we not use python interpreters even private to each multiple
> thread?

You can use multiple interpreters, but they share the GIL. Python
extension modules, for example, are DLLs and will be loaded only once
per process - the OS makes sure of that. Since the GIL lives in the
Python DLL, which is also loaded only once, there can only be one GIL
per process.

The same goes for files: if an interpreter calls close on a file
handle, it is closed for the whole process, not just locally to that
interpreter. Global synchronization is therefore needed. (Maybe not
for your threaded app, but for some conceivable use of Python.)

If you need two isolated interpreters, you need to run them in
different processes.

Message has been deleted

ganesh

Sep 7, 2009, 1:59:15 AM
> Did you remember to acquire the GIL? The GIL is global to the process
> (hence the name).

No, I did not use the GIL.
-- To use the GIL, do we need to initialize it at startup and
destroy/finalize it at the end?
-- Are there any configuration and build flags that I need to use to
make this work?

Please guide. Thanks.

sturlamolden

Sep 7, 2009, 2:04:08 AM
On 7 Sep, 07:59, ganesh <ganeshbo...@gmail.com> wrote:

> No, I did not use the GIL.
> -- To use the GIL, do we need to initialize it at startup and
> destroy/finalize it at the end?
> -- Are there any configuration and build flags that I need to use to
> make this work?
>
> Please guide. Thanks.

I just showed you how...

ganesh

Sep 7, 2009, 3:28:49 AM
On Sep 7, 2:04 pm, sturlamolden <sturlamol...@yahoo.no> wrote:
> I just showed you how...

I modified the thread function to use these APIs, but the call to
PyGILState_Ensure() never returns.

void *callPyFunction(void *arg)
{
    /* Method two to get function eval */
    long thridx = (long)arg;
    printf("\n---->my num=%ld, calling showstr pyfunction\n", thridx);
    char callpyf[] = "showstr(100)\n";
    PyGILState_STATE state = PyGILState_Ensure();
    printf("after PyGILState_Ensure\n");
    /* glb and loc are the global/local dictionaries set up in main() */
    PyObject *pycall = PyRun_String(callpyf, Py_file_input, glb, loc);
    if (pycall == NULL || PyErr_Occurred()) {
        printf("PyRun_String failed\n");
        PyErr_Print();
    } else {
        printf("%ld thread called showstr pyfunction ok\n", thridx);
    }
    Py_XDECREF(pycall);
    PyGILState_Release(state);
    pthread_exit(NULL);
}

Graham Dumpleton

Sep 7, 2009, 3:41:54 AM

You can't use that in this case, as they are specifically talking about
an interpreter per thread, which implies creating additional
sub-interpreters. The simplified GIL state API you mentioned only works
for threads operating in the main (first) interpreter created within
the process.

The OP can do what they want, but they need to use lower-level
routines for creating their own thread state objects and acquiring the
GIL against them.

Graham

ganesh

Sep 7, 2009, 4:47:41 AM
On Sep 7, 3:41 pm, Graham Dumpleton <graham.dumple...@gmail.com>
wrote:

> On Sep 7, 3:42 pm, sturlamolden <sturlamol...@yahoo.no> wrote:
> interpreters. The simplified GIL state API you mentioned only works
> for threads operating in the main (first) interpreter created within
> the process.

I modified my program so that Py_Initialize and the compilation of one
Python function are done in the main() thread. Then I call only that
function in the callPyFunction() thread. But the thread never comes
out of PyGILState_Ensure().

> The OP can do what they want, but they need to user lower level
> routines for creating their own thread state objects and acquiring the
> GIL against them.
>
> Graham

What are the "lower-level routines" for creating our own thread state
objects and acquiring the GIL?
Also, where can I find more information about those routines?

Please guide. Thanks.

Graham Dumpleton

Sep 7, 2009, 6:49:00 AM

Documentation is at:

http://docs.python.org/c-api/init.html

Are you really using sub-interpreters, though? There is no evidence of
that in the code you posted earlier.

Are you sure you just don't understand enough about it, are using the
wrong terminology, and all you really want to do is run externally
created threads through the one interpreter?

Using sub-interpreters is not for the faint of heart, and not
something you want to do unless you want to understand the Python C
API internals very well.

Graham

Ulrich Eckhardt

Sep 7, 2009, 7:17:32 AM
ganesh wrote:
>> Did you remember to acquire the GIL? The GIL is global to the process
>
> No, I did not use GIL.
>
> -- Why do we need to use GIL even though python is private to each
> thread?

Quoting from above: "The GIL is global to the process". So no, it is NOT
private to each thread, which means "python" isn't either.

At least that is my understanding of the issue.

Uli

--
Sator Laser GmbH
Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932

ganesh

Sep 7, 2009, 7:53:10 AM
Actually, I modified my program to have a single shared Python
interpreter across all threads, to test the usage of the GIL. So I did
Py_Initialize in the main() function and only called that Python
function in the different threads.

But this is not the way I want to use interpreters in my code.

I am looking at using sub-interpreters; I just did not know the term
until you mentioned it here.
So before this, I was calling Py_Initialize in each of the C++ level
threads, which was wrong.

I should be using Py_NewInterpreter() in my threads and Py_Initialize()
in the main() thread. Please correct me if I am wrong.

>>>>not something you want to do unless you want to understand Python C API internals very well.

Are these APIs really tricky to use with C++ embedding? Can't they be
used through the C APIs?

I need these to get proper concurrency in my multi-threaded
application without any synchronization mechanisms.

sturlamolden

Sep 7, 2009, 8:13:54 AM
On 7 Sep, 13:53, ganesh <ganeshbo...@gmail.com> wrote:

> I need to use these to get the proper concurrency in my multi-threaded
> application without any synchronization mechanisms.

Why will multiple interpreters give you better concurrency? You can
have more than one thread in the same interpreter.

Here is the API explained:

http://docs.python.org/c-api/init.html
http://www.linuxjournal.com/article/3641
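
A small sketch of that point, multiple OS threads running in one shared interpreter (the worker function and the hits list are made up for illustration): all threads see the same module-level objects, and the GIL serializes their access to interpreter state.

```python
import threading

hits = []  # shared by all threads: one interpreter, one object space

def worker(n):
    # list.append is safe here: the GIL serializes bytecode execution,
    # and append is a single atomic bytecode-level operation.
    hits.append(n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(hits))  # [0, 1, 2, 3, 4, 5, 6, 7]
```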

sturlamolden

Sep 7, 2009, 8:43:59 AM
On 7 Sep, 13:17, Ulrich Eckhardt <eckha...@satorlaser.com> wrote:

> Quoting from above: "The GIL is global to the process". So no, it is NOT
> private to each thread which means "python" isn't either.
>
> At least that is my understanding of the issue.

Strictly speaking, the GIL is global to the Python DLL, not the
process. You can load the DLL multiple times from different paths, and
if you do, the GIL will not be shared.

If you make several clean installs of Python on Windows (say
c:\Python26-0 ... c:\Python26-3), you can embed multiple interpreters
in a process, and they will not share the GIL or anything else. But be
careful: all *.pyd files must reside in these directories, or they
will be loaded only once and the refcounts will screw up. The Python
DLL must also be in these directories, not in c:\windows\system32. It
is in fact possible to make Python utilize dual- and quad-core
processors on Windows this way. You can even use ctypes for embedding
Python into Python, so no C is required. See:

http://groups.google.no/group/comp.lang.python/browse_thread/thread/2d537ad8df9dab67/812013f9ef3a766d

To make this hack really work, one needs multiple complete installs,
not just copies of one DLL as I assumed in that thread. But the
general method is the same.

You can see this as a form of primitive object orientation :P


Sturla Molden

MRAB

Sep 7, 2009, 8:50:07 AM, to pytho...@python.org

CPython's GIL means that multithreading on multiple processors/cores has
limitations. Each interpreter has its own GIL, so processor-intensive
applications work better using the multiprocessing module than with the
threading module.

sturlamolden

Sep 7, 2009, 9:10:19 AM
On 7 Sep, 14:50, MRAB <pyt...@mrabarnett.plus.com> wrote:

> CPython's GIL means that multithreading on multiple processors/cores has
> limitations. Each interpreter has its own GIL, so processor-intensive
> applications work better using the multiprocessing module than with the
> threading module.

We incur a 200x speed penalty from Python if the code is CPU-bound.
How much do you gain from one extra processor? Start by adding in some
C, and you will gain much more performance, even without parallel
processing.

Processor-intensive code should therefore use extension libraries that
release the GIL. Then you have the option of getting parallel
concurrency via Python threads or OpenMP in C/Fortran. Most
processor-intensive code depends on numerical libraries written in C
or Fortran anyway (NumPy, ATLAS, LAPACK, FFTW, GSL, MKL, etc.). When
you do this, the GIL does not get in your way. The GIL is in fact an
advantage if a library happens not to be thread-safe. Many old Fortran
77 libraries have functions with re-entrancy issues, due to the SAVE
attribute on variables. Thus, the GIL is often an advantage for
processor-intensive code.
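
A sketch of that first point, that calls which release the GIL let Python threads genuinely overlap; time.sleep stands in here for a C extension routine that drops the GIL while it blocks or computes:

```python
import threading
import time

start = time.monotonic()
# time.sleep releases the GIL while blocking, just as well-behaved
# C extensions do around long computations or I/O.
threads = [threading.Thread(target=time.sleep, args=(0.2,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
print(elapsed < 0.6)  # True: the four 0.2 s waits overlapped, ~0.2 s total
```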

multiprocessing is fragile and unreliable, btw.


Mark Hammond

Sep 7, 2009, 6:31:37 PM, to pytho...@python.org
On 7/09/2009 10:50 PM, MRAB wrote:
> CPython's GIL means that multithreading on multiple processors/cores has
> limitations. Each interpreter has its own GIL, so processor-intensive
> applications work better using the multiprocessing module than with the
> threading module.

I believe you will find the above is incorrect - even with multiple
interpreter states you still have a single GIL.

Mark

Benjamin Kaplan

Sep 7, 2009, 6:51:37 PM, to pytho...@python.org
On Mon, Sep 7, 2009 at 6:31 PM, Mark Hammond<skippy....@gmail.com> wrote:
> On 7/09/2009 10:50 PM, MRAB wrote:
>>
>> sturlamolden wrote:
>>>
>> CPython's GIL means that multithreading on multiple processors/cores has
>> limitations. Each interpreter has its own GIL, so processor-intensive
>> applications work better using the multiprocessing module than with the
>> threading module.
>
> I believe you will find the above is incorrect - even with multiple
> interpreter states you still have a single GIL.
>
Not according to the docs.

http://docs.python.org/library/multiprocessing.html :

multiprocessing is a package that supports spawning processes using an
API similar to the threading module. The multiprocessing package
offers both local and remote concurrency, effectively side-stepping
the Global Interpreter Lock by using subprocesses instead of threads.
Due to this, the multiprocessing module allows the programmer to fully
leverage multiple processors on a given machine. It runs on both Unix
and Windows.
> Mark

Grant Edwards

Sep 7, 2009, 7:16:24 PM
On 2009-09-07, Mark Hammond <skippy....@gmail.com> wrote:

>> CPython's GIL means that multithreading on multiple
>> processors/cores has limitations. Each interpreter has its own
>> GIL, so processor-intensive applications work better using the
>> multiprocessing module than with the threading module.
>
> I believe you will find the above is incorrect - even with
> multiple interpreter states you still have a single GIL.

Please explain how multiple processes, each with a separate
Python interpreter, share a single GIL.

--
Grant

Mark Hammond

Sep 7, 2009, 7:28:38 PM, to Grant Edwards, pytho...@python.org


Sorry, my mistake, I misread the original - using multiple Python
processes does indeed mean a GIL per process. I was referring to the
'multiple interpreters in one process' feature of Python, which is
largely deprecated; if used, all 'interpreters' share the same GIL.

To clarify: in a single process there will only ever be one GIL, but
across multiple processes there most certainly will be multiple GILs.

Apologies for the confusion...

Cheers,

Mark

Grant Edwards

Sep 7, 2009, 9:31:39 PM
On 2009-09-07, Mark Hammond <skippy....@gmail.com> wrote:

> Sorry, my mistake, I misread the original - using multiple
> Python processes does indeed have a GIL per process. I was
> referring to the 'multiple interpreters in one process'
> feature of Python which is largely deprecated, but if used,
> all 'interpreters' share the same GIL.

Oh yeah, I had forgotten you could do that. I can see how one could
have interpreted the "multiple instances" references to mean multiple
interpreter instances within a process.

--
Grant

ganesh

Sep 7, 2009, 10:22:17 PM
My application is a TCP server with multiple client connections. A C++
pthread is created for each connected socket, and the message received
on the socket is evaluated by Python functions.
If I use only one process-level Python interpreter, then every thread
has to lock the GIL, and so blocks the other threads from executing
Python code, even if it is not the same Python function that the
locking thread is calling.

-- That's why I tried using a Python interpreter per thread. But that
also required GIL locking, and so cannot be used.

-- I cannot use Python threads inside the Python interpreter, because
then I would have to have some mechanism for communication between the
C++ pthreads and these Python threads.

I think there is no way we can achieve this, because the GIL is
process-level state. The least I can do is have one Python interpreter
initialized in the main thread and lock the GIL in every thread for
Python calls.

I V

Sep 8, 2009, 2:46:01 AM
On Mon, 07 Sep 2009 19:22:17 -0700, ganesh wrote:

> My application is a TCP server having multiple client connectons. C++
> PTHREADS are for each connected socket and the message received on the
> socket is evaluated by python functions. If I use only one process level

Do you have to use threads? If you use a process per connection, rather
than a thread, each process will have its own GIL.

sturlamolden

Sep 8, 2009, 3:10:50 AM
On 8 Sep, 04:22, ganesh <ganeshbo...@gmail.com> wrote:

> My application is a TCP server having multiple client connectons. C++
> PTHREADS are for each connected socket and the message received on the
> socket is evaluated by python functions.
> If I use only one process level python interpreter, then every thread
> has to lock the GIL & so blocking the other threads from executing the
> python code even if it is not the same python function that locking
> thread is calling.

Usually, TCP servers are I/O-bound. You can safely use a single Python
process for this. A function evaluating a request will hold the GIL
for a while (but not the whole time until it's done). But most other
threads will be blocked waiting for I/O. Thus, there will be little
contention for the GIL anyway, and it should not affect scalability
much. Only when multiple requests are processed simultaneously will
there be contention for the GIL. You can create high-performance TCP
servers in plain Python using e.g. Twisted. If you are in the strange
situation that a TCP server is compute-bound, consider using multiple
processes (os.fork or multiprocessing).


> I think there is no way that we can achieve this because of the GIL
> being a process level state. Least I can do is have one python
> interpreter initialized in main thread and lock the GIL in every
> thread for python calls.

I think you will find that your server is indeed I/O-bound, like 99.9%
of all other TCP servers on this planet. Try to embed a single
interpreter first. Use the simplified GIL API I showed you. Most
likely you will find that it suffices.

If you need something more scalable, associate each pthread with a
separate Python process - e.g. using a named pipe on Windows or a Unix
domain socket on Linux.
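
A sketch of that worker-process idea, here using an anonymous pipe to a child interpreter started with subprocess rather than a named pipe or Unix domain socket; the "eval:" line protocol is invented for illustration:

```python
import subprocess
import sys

# Source for the worker process: echo each request line back, "evaluated".
WORKER_SRC = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    sys.stdout.write('eval:' + line)\n"
    "    sys.stdout.flush()\n"
)

# The child is a completely separate Python process: its own interpreter,
# its own GIL, talking to us over pipes.
child = subprocess.Popen(
    [sys.executable, "-c", WORKER_SRC],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

child.stdin.write("showstr(100)\n")
child.stdin.flush()
reply = child.stdout.readline().strip()

child.stdin.close()
child.wait()
print(reply)  # eval:showstr(100)
```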


ganesh

Sep 8, 2009, 3:14:01 AM
On Sep 8, 2:46 pm, I V <ivle...@gmail.com> wrote:
> Do you have to use threads? If you use a process per connection, rather
> than a thread, each process will have its own GIL.

No, I cannot change from threads to processes for handling
connections. That would change the complete design of our application,
which is not feasible just for the Python evaluation of the strings.

sturlamolden

Sep 8, 2009, 3:16:30 AM
On 8 Sep, 08:46, I V <ivle...@gmail.com> wrote:

> Do you have to use threads? If you use a process per connection, rather
> than a thread, each process will have its own GIL.

If ganesh is using Linux or Unix (which pthreads indicate), fork() is
just as efficient as threads.

On Windows, one would need to keep a farm of prespawned Python
processes, connected with pipes to the main server.
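
A minimal fork() sketch (Unix only) of that point: the child gets its own copy of the interpreter and continues independently of the parent; the pipe and its one-word message are just for illustration:

```python
import os

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: a private copy of the interpreter, with its own GIL.
    os.close(r)
    os.write(w, b"done")
    os._exit(0)

# Parent: continues independently and collects the child's message.
os.close(w)
msg = os.read(r, 4)
os.close(r)
os.waitpid(pid, 0)
print(msg)  # b'done'
```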


sturlamolden

Sep 8, 2009, 3:23:02 AM
On 8 Sep, 09:14, ganesh <ganeshbo...@gmail.com> wrote:

> No, i cannot change from threads to processes for handling
> connections. This will change the complete design of our application
> which is not feasilbe for python evaluation of the strings.

So the problem is actually bad design?

Graham Dumpleton

Sep 8, 2009, 3:37:40 AM
On Sep 8, 9:28 am, Mark Hammond <skippy.hamm...@gmail.com> wrote:
> I was referring to the
> 'multiple interpreters in one process' feature of Python which is
> largely deprecated, ...

Can you please point to where the Python documentation says that
support for multiple interpreters in one process is 'largely
deprecated'?

I know that various people would like the feature to go away, but I
don't believe I have ever seen an official statement from Guido, or
another person in a position to make one, stating that the official
view is that the API is deprecated.

Even in Python 3.1, the documentation for these APIs seems merely to
state some of their limitations and that it is a hard problem, while
still saying that the problem will be addressed in future versions.

Graham
