Hi,
I'll try to explain how I understand your question and how I would approach this issue; please correct me if I am wrong. There are many possible ways of using parallel computational power, and the best solution depends heavily on your specific problem. I am far from an expert, but the little I've learned over the past years tells me there is no easy answer.
1) Spyder uses multi-threading at the application level (using QThread, if I understand correctly), so things like code completion, the monitor, and documentation lookup don't freeze the whole program while they run. However, this is not related to running your Python scripts: Spyder's internal threading is, as far as I understand it, not related at all to the scripts you want to run. Python/IPython consoles run in a process that is separate from the main Spyder process.
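Just to illustrate what application-level threading looks like (this is not Spyder's actual code), here is a minimal PyQt4 sketch in which the slow work runs on a QThread so the GUI stays responsive; the worker class and its workload are made up for the example:

    import sys
    from PyQt4.QtCore import QThread
    from PyQt4.QtGui import QApplication

    class Worker(QThread):            # hypothetical worker, e.g. a completion lookup
        def run(self):                # executed in the background thread
            result = sum(range(10**7))   # stand-in for slow work
            print("worker finished:", result)

    app = QApplication(sys.argv)
    worker = Worker()
    worker.finished.connect(app.quit)    # quit once the background work is done
    worker.start()                       # run() now executes in its own thread
    app.exec_()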
When you want to use the parallel power of your computer cluster, you need to launch a Python or IPython console in Spyder that is aware of all those CPUs/GPUs. From there you can use the built-in modules threading and multiprocessing to start using all that computational power explicitly (see the sketch below). Note that, depending on what you are trying to achieve, programming parallel applications can be challenging. The summer school "Advanced Scientific Programming in Python" [1] has some very nice lectures on this, and I recommend having a look at those informative slides.
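To give an idea, here is a minimal sketch that uses multiprocessing to spread a CPU-bound function over all cores; the function and its inputs are placeholders:

    from multiprocessing import Pool

    def work(n):                  # placeholder for a CPU-bound task
        return sum(i * i for i in range(n))

    if __name__ == "__main__":    # required on platforms that spawn processes
        pool = Pool()             # one worker process per CPU core by default
        results = pool.map(work, [10**6] * 8)   # distribute 8 tasks over the pool
        pool.close()
        pool.join()
        print(results)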
2) I don't think Spyder uses HTTP to connect to other Python consoles; I am not sure how HTTP relates to the problem at hand.
3) An interactive Python console refers to an interactive workflow and does not necessarily have anything to do with qsub (I assume you refer to your cluster's mechanism to submit and queue jobs?).
When I look at how our clusters are configured, I could theoretically imagine the following workflow:
* launch an IPython instance on the cluster with qsub and let it use as many CPUs as you see fit (for instance, specify #PBS -lnodes=xxx:ppn=yyy in your PBS script), and give it as much wall time as you think you need.
* connect to that IPython console using the notebook/web interface (I know people do that, I just don't know how; there must be documentation available somewhere, or ask on the IPython mailing list, or see [8])
* within Spyder, connect to the IPython console running on the cluster (I don't think that is implemented yet, and I am not sure whether it is planned)
* or, check out [8], "Using IPython for parallel computing"; a rough sketch follows below this list
* or rent a preconfigured IPython instance on a cluster at [11]
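As a rough illustration of [8], here is a minimal sketch of IPython's parallel interface, assuming a set of engines has already been started (e.g. with the ipcluster command) and the connection file is in its default location; in newer IPython versions this functionality lives in the separate ipyparallel package:

    from IPython.parallel import Client

    rc = Client()                 # connect to the running cluster
    dview = rc[:]                 # a direct view on all engines
    squares = dview.map_sync(lambda x: x**2, range(32))   # distribute the work
    print(squares)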
You can release the GIL when using Cython; see for example the slides on Cython in [1]. You cannot release the GIL in a pure Python script. For explicit concurrency directly in your Python script, use the built-in modules threading and multiprocessing.
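For example, a minimal sketch with the threading module; because of the GIL this mainly pays off for I/O-bound work (the sleep below is just a stand-in for real I/O), while CPU-bound work is better served by multiprocessing, as sketched above:

    import threading
    import time

    def wait_for_io(seconds):     # stand-in for an I/O-bound task
        time.sleep(seconds)       # sleeping releases the GIL, like real I/O does
        print("done after %s s" % seconds)

    threads = [threading.Thread(target=wait_for_io, args=(s,)) for s in (1, 2)]
    for t in threads:
        t.start()                 # both "tasks" now run concurrently
    for t in threads:
        t.join()                  # wait for every thread to finish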
NumPy/SciPy/Numba and other Python modules already use, in some cases, the parallel power of your machine. Some examples on Numba can be found here [2] and here [3]. For NumPy/SciPy this depends on your BLAS/LAPACK implementation (such as MKL, OpenBLAS, ACML): the low-level number-crunching routines on which they are built. Building NumPy from source against a BLAS/LAPACK implementation optimized for your machine can be challenging, depending on your experience/skills [9] [10], but performance can increase significantly [4].
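If you want to check which BLAS/LAPACK your NumPy build picked up, something like the following works (the matrix size is an arbitrary choice; watch your CPU monitor while it runs to see whether the product uses several cores):

    import numpy as np

    np.show_config()              # prints the BLAS/LAPACK NumPy was built against

    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)
    c = np.dot(a, b)              # dispatched to the BLAS matrix-product routine
    print(c.shape)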
Other, more exotic libraries that can help unleash the parallel power of CPUs/GPUs (I only know that they exist): Magma [5], Plasma [6], CUBLAS [7].