Bjoern super-fast event-loop WSGI server


Branko Vukelić

Dec 24, 2010, 9:59:43 AM12/24/10
to we...@googlegroups.com
From the github repo[1]:

bjoern aims to be small, lightweight and very fast.

* less than 1000 SLOC (Source Lines Of Code)
* memory footprint smaller than a megabyte
* no threads, coroutines or other crap
* apparently the fastest WSGI server out there
* 100% WSGI compliant (except for the write callback design mistake)


[1] https://github.com/jonashaag/bjoern

--
Branko Vukelic

stu...@brankovukelic.com
http://www.brankovukelic.com/

Greg Milby

Dec 24, 2010, 12:20:55 PM12/24/10
to we...@googlegroups.com

1 nvr RAM.  Wow

::Sent from my Verizon HTC incredible gOS::

> --
> You received this message because you are subscribed to the Google Groups "web.py" group.
> To post to this group, send email to we...@googlegroups.com.
> To unsubscribe from this group, send email to webpy+un...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/webpy?hl=en.
>

Alice Bevan–McGregor

Dec 25, 2010, 3:10:54 AM12/25/10
to we...@googlegroups.com
Attempting to get it running locally for comparative benchmarking, I
discovered that bjoern has an unannounced dependency on libev, and
browsing the source, it's 100% C. (Thus no portability at all to
Jython, IronPython, etc.)

Interesting idea, though as far as I can tell it's HTTP/1.0 only. :/

Does anyone have any numbers they could share? I'm having some
difficulty compiling it against ports' libev installation. (Hello world
comparisons against Paste and Tornado interest me, though comparisons
against marrow.server.http are my final goal.)

- Alice.


Branko Vukelić

Dec 25, 2010, 6:01:51 AM12/25/10
to we...@googlegroups.com
On Sat, Dec 25, 2010 at 9:10 AM, Alice Bevan–McGregor
<al...@gothcandy.com> wrote:
> Attempting to get it running locally for comparative benchmarking, I
> discovered that bjoern has an unannounced dependency on libev, and browsing

Yeah, got bitten by libev myself. :) Filed a 'bug', and README now fixed. ;)

> the source, it's 100% C.  (Thus no portability at all to Jython, IronPython,
> etc.)

If it turns out to be fast (enough) for me, I'm happy. I wasn't
looking for a pure Python solution. I was looking for a wsgi server
that doesn't have the "not for production use" label. This one was the
first I came across.

Branko Vukelić

Dec 25, 2010, 6:02:54 AM12/25/10
to we...@googlegroups.com
2010/12/25 Branko Vukelić <stu...@brankovukelic.com>:

> If it turns out to be fast (enough) for me, I'm happy. I wasn't
> looking for a pure Python solution. I was looking for a wsgi server
> that doesn't have the "not for production use" label. This one was the
> first I came across.

Of course, not counting CherryPy, which officially _is_ for production
use, but many people look down on it.

Alice Bevan–McGregor

Dec 25, 2010, 6:19:03 AM12/25/10
to we...@googlegroups.com
> Of course, not counting CherryPy, which officially _is_ for production
> use, but many people look down on it.

Or Paste, but it's hardly performant. Just mentioning it for
completeness' sake. ;) All of my production deployments utilize
multi-process WSGI<->FastCGI and on-disk sockets.

After the changes I'm making to marrow.server.http[1] tonight it may be
easier to back-port it to support WSGI 1 (PEP 333). I'm mentioning
this because in my tests m.s.http handles C10K, is able to process >
10K req/sec at C6K or so, is fully unit tested, and has complete
documentation. It hasn't been -around- long enough to be classified
"stable", but it supports HTTP/1.1, Python 2.6+, and Python 3.1+ fully.
;)

Unfortunately for most people, it currently only supports PEP 444[2]
(with modifications[3]) and, soon, my draft rewrite[4].

Hell, creating a middleware-based adapter from WSGI 1 to my version of
PEP 444 should be relatively straight-forward, considering the same
incompatibility that bjoern takes exception to (the writer function
returned by start_response, or rather, the lack of it).
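For what it's worth, a minimal sketch of such an adapter (entirely hypothetical code; it assumes the PEP 444 side takes environ and returns a (status, headers, body) triple, and the names are made up):

```python
# Hypothetical sketch only: wrap a WSGI 1 app so it can be called with
# PEP 444-style semantics (environ in, a (status, headers, body) triple
# out, no start_response on the outside).

def wsgi1_to_wsgi2(app):
    def adapter(environ):
        collected = {}

        def start_response(status, headers, exc_info=None):
            collected['status'] = status
            collected['headers'] = headers
            # The writer callable is exactly the part PEP 444 drops;
            # returning a no-op keeps well-behaved apps working.
            return lambda data: None

        body = app(environ, start_response)
        return collected['status'], collected['headers'], body
    return adapter

# A trivial WSGI 1 app for demonstration:
def hello(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, world!']

status, headers, body = wsgi1_to_wsgi2(hello)({'REQUEST_METHOD': 'GET'})
```

Apps that actually push data through the returned writer would still break, of course, which is the same incompatibility bjoern takes exception to.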

- Alice.

[1] http://bit.ly/fLfamO
[2] http://www.python.org/dev/peps/pep-0444/
[3] http://bit.ly/fRyMJ2
[4] http://bit.ly/e7rtI6

^ Bit.ly'd because of stupidly long GitHub URLs.


Branko Vukelić

Dec 25, 2010, 8:13:02 AM12/25/10
to we...@googlegroups.com
On Sat, Dec 25, 2010 at 12:19 PM, Alice Bevan–McGregor
<al...@gothcandy.com> wrote:
> After the changes I'm making to marrow.server.http[1] tonight it may be
> easier to back-port it to support WSGI 1 (PEP 333).  I'm mentioning this
> because in my tests m.s.http handles C10K, is able to process > 10K req/sec
> at C6K or so, is fully unit tested, and has complete documentation.  It hasn't
> been -around- long enough to be classified "stable", but it supports
> HTTP/1.1, Python 2.6+, and Python 3.1+ fully. ;)

Impressive. @_@

Alice Bevan–McGregor

Dec 25, 2010, 9:21:42 AM12/25/10
to we...@googlegroups.com
> Impressive. @_@

I wrote it as an experiment and "reference implementation" for PEP 444,
expecting crap performance and, at most, use as a local development
server. And I haven't benchmarked it with pipelining enabled yet. :/
(The protocol implementation is only 172 instructions, as reported by
coverage on Py2.7.)

Unfortunately a minimum requirement of Python 2.6 is a deal breaker for
the majority of production deployments. :( If we can't get 2.6 (one
minor version behind) in production, I have no idea how long it'll take
Python 3 to get into production. 10 years? :'(

- Alice

(As an aside, Python 2.6 and 3.1 are on all of my production servers,
yay Gentoo!)


Branko Vukelić

Dec 25, 2010, 9:51:17 AM12/25/10
to we...@googlegroups.com
On Sat, Dec 25, 2010 at 3:21 PM, Alice Bevan–McGregor
<al...@gothcandy.com> wrote:
> Unfortunately a minimum requirement of Python 2.6 is a deal breaker for the

Personally, I develop all my projects in Py2.7 atm, so it's great news anyway.

Greg Milby

Dec 25, 2010, 11:06:52 AM12/25/10
to we...@googlegroups.com
did anyone find the tweaks to make bjoern work?

2010/12/25 Branko Vukelić <stu...@brankovukelic.com>

Branko Vukelić

Dec 25, 2010, 11:48:41 AM12/25/10
to we...@googlegroups.com
It works for me.

Alice Bevan–McGregor

Dec 25, 2010, 4:38:35 PM12/25/10
to we...@googlegroups.com
> It works for me.

Alas, on my Mac setup, I've had to give up for the time being, see:

https://github.com/jonashaag/bjoern/issues/#issue/11

- Alice.


Branko Vukelić

Dec 25, 2010, 4:41:24 PM12/25/10
to we...@googlegroups.com

Yeah, I've seen it. That's too bad. I was kinda hoping you'd post the
numbers to see if it amounts to 'super-fast'. :)

Branko Vukelić

Dec 25, 2010, 4:42:17 PM12/25/10
to we...@googlegroups.com
2010/12/25 Branko Vukelić <stu...@brankovukelic.com>:

> On Sat, Dec 25, 2010 at 10:38 PM, Alice Bevan–McGregor
> <al...@gothcandy.com> wrote:
>>> It works for me.
>>
>> Alas, on my Mac setup, I've had to give up for the time being, see:
>>
>>        https://github.com/jonashaag/bjoern/issues/#issue/11
>
> Yeah, I've seen it. That's too bad. I was kinda hoping you'd post the
> numbers to see if it amounts to 'super-fast'. :)

Speaking of which, is it possible to run wsgi1 apps on it? I currently
use bottle for one of the projects.

Alice Bevan–McGregor

Dec 25, 2010, 7:20:21 PM12/25/10
to we...@googlegroups.com
On 2010-12-25 13:42:17 -0800, Branko Vukelić said:

> Speaking of which, is it possible to run wsgi1 apps on it? I currently
> use bottle for one of the projects.

Do you mean run WSGI 1 apps on m.s.http? No. Creating a
middleware-style adapter shouldn't be too difficult, though.

- Alice.


Branko Vukelić

Dec 25, 2010, 7:40:37 PM12/25/10
to we...@googlegroups.com

So basically, add a middleware that translates the API between my
wsgi1 app and the wsgi2 server. Makes sense. Although it also makes
sense to just use a wsgi1 server. :D

Angelo Gladding

Jan 1, 2011, 5:05:41 AM1/1/11
to we...@googlegroups.com
On Sat, Dec 25, 2010 at 12:10 AM, Alice Bevan–McGregor
<al...@gothcandy.com> wrote:
> Interesting idea, though as far as I can tell it's HTTP/1.0 only.  :/

To be fair he does say "Not HTTP/1.1 capable (yet)." Is it the
pipelining that you'd miss?

> Does anyone have any numbers they could share?  I'm having some difficulty
> compiling it against ports' libev installation. (Hello world comparisons
> against Paste and Tornado interest me, though comparisons against
> marrow.server.http are my final goal.)

http://nichol.as/benchmark-of-python-web-servers told me to use
`gevent` as it scored well and greenlets sounded attractive for a
potential faraway leveraging of a [stackless] PyPy (as its most recent
version has surpassed CPython in at least a few areas of performance).

Since then I have crossed paths with bjoern as many others have (YC
News, Proggit, here).

- - -

My application consists of a modified clone of web.py sans the bloat.
I never thought I'd say such a thing of the anti-framework framework.
Full disclosure: I've implemented my own `web.application`, borrowed
heavily from `web.utils`/`web.http`, reimplemented most of
`web.webapi` keeping with the style, and carried over the holy
`web.template`.

During the flux of transitioning away from lighty/web.py toward a more
performant combination of one of at least a dozen Python servers and a
web.py-esque application-style I have temporarily reduced my project's
code base to its bare essentials. Profiling, benchmarking, and browser
auditing will follow as I place each feature back into its proper
place. Bjoern and this e-mail have both come at the start of this
process. I'll be leaving the spawn code for `marrow`, `gevent`, and
`bjoern` in place for stress testing alongside all other testing.

A bit about the current level of complexity is important to understand
the results:

Currently a middleware sanitizes the `environ` into a nested storage
of request and response data, parsing `accept*` headers, parsing
`user-agent` against `browscap` to determine browser capabilities,
geolocating according to `REMOTE_ADDR`, and checking whether the
request is AJAX using `X-Requested-With`. No cookie/session handling
has been reintroduced yet.

The landing page does one of two things depending on context and
allows for two separate testing environments.

1. If it is requested by a javascript-capable user-agent it responds
with a skeleton and jQuery. jQuery is set to never expire from cache.
Upon the document's ready state, four AJAX requests are made for the
four main content areas of the page. One of the requests is set to
stream using generator syntax (yield), which triggers the appropriate
chunked transfer-encoding and is properly handled by supplying jQuery
with a modified XHR object. (a la Facebook's `BigPipe`)

2. If it is requested by a non-javascript-capable user-agent it
precompiles the output from the four page handlers internally and
flushes a complete page in one shot. jQuery is obviously /not/ fetched
(in `ab` or `lynx`).
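Mode 1 above, reduced to a toy WSGI 1 handler (hypothetical names; the real handlers obviously do far more than this), looks roughly like:

```python
# Hypothetical sketch of the streaming mode: yielding fragments lets
# the server send each one as its own chunk (chunked transfer-encoding,
# no Content-Length declared up front).

def landing(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    yield b'<div id="skeleton"></div>'  # skeleton goes out first
    for area in (b'header', b'nav', b'content', b'footer'):
        # each content area is flushed as soon as it is rendered
        yield b'<div class="area">' + area + b'</div>'

chunks = list(landing({}, lambda status, headers: None))  # 5 fragments
```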

My data backend is memcached[b!] run in threaded mode, which boasts
something like 60k reads/sec. I am /not/ aware of any server-level
internal caching and I'm using the standard (non-C) python-memcache
lib to interface. A single key of ~0.3KB is fetched each request.

All `web.template`s are *precompiled* at application boot. Each of the
four content handlers is produced via a template. The generator yields
4 templates. That is 7 total. The landing utilizes these according to
the rules above. The entire product is wrapped in a base template just
before response. 9 template calls in total.

- - -

To run the following dirty benchmark on a single core eight year old
Dell with 1.5GB generic RAM that's been running Ubuntu for ~60 hours
with a leaking video card while watching (decoding) an Xvid and
leeching/seeding with ~12 peers over BitTorrent with Opera and Chrome
open as well as Firefox, with 50 (fifty) tabs open, including Gmail,
and developer panes open in all three, with `screen` open in terminal
and twelve separate user sessions active including one `htop` showing
VLC and Firefox beating each other up for 50-90% of my CPU and
833/1127MB of RAM currently consumed, I'm going to run `ab` with 25
concurrent connections (-c 25).

Without further ado..

- - -

## MARROW FOUR WORKERS #####

FF 3.6.1 (AJAX)
unprimed: 6 requests, 77.8 KB (0 from cache), 4.68s (onload: 4.89s)
primed: 6 requests, 77.8 KB (76.8 KB from cache), 3.93s (onload: 4.11s)

AB (1K)
Document Path: /
Document Length: 944 bytes

Concurrency Level: 25
Time taken for tests: 170.002 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 1015000 bytes
HTML transferred: 944000 bytes
Requests per second: 5.88 [#/sec] (mean)
Time per request: 4250.044 [ms] (mean)
Time per request: 170.002 [ms] (mean, across all concurrent requests)
Transfer rate: 5.83 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 5.8 0 46
Processing: 699 4227 1497.8 4276 7654
Waiting: 4 49 81.9 35 596
Total: 699 4228 1498.1 4276 7654

Percentage of the requests served within a certain time (ms)
50% 4276
66% 4984
75% 5335
80% 5545
90% 6238
95% 6691
98% 7164
99% 7401
100% 7654 (longest request)

## GEVENT (GREENLETS, 10K MAX POOL, PYTHON WSGI) #####

FF 3.6.1 (AJAX)
unprimed: 6 requests, 77.8 KB (0 from cache), 4.04s (onload: 4.26s)
primed: 6 requests, 77.8 KB (76.8 KB from cache), 4.26s (onload: 4.53s)

AB (1K)
Document Path: /
Document Length: 944 bytes

Concurrency Level: 25
Time taken for tests: 158.792 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 1092000 bytes
HTML transferred: 944000 bytes
Requests per second: 6.30 [#/sec] (mean)
Time per request: 3969.790 [ms] (mean)
Time per request: 158.792 [ms] (mean, across all concurrent requests)
Transfer rate: 6.72 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.5 0 8
Processing: 170 3901 914.4 3432 5765
Waiting: 14 3750 886.1 3305 5559
Total: 178 3901 914.2 3432 5765

Percentage of the requests served within a certain time (ms)
50% 3432
66% 3686
75% 4481
80% 4945
90% 5553
95% 5633
98% 5687
99% 5730
100% 5765 (longest request)

## GEVENT (GREENLETS, 10K MAX POOL, C WSGI) #####

FF 3.6.1 (AJAX)
unprimed: 6 requests, 77.7 KB (0 from cache), 1.91s (onload: 2.19s)
primed: 6 requests, 77.7 KB (76.8 KB from cache), 1.7s (onload: 1.98s)

AB (10K)
Document Path: /
Document Length: 944 bytes

Concurrency Level: 25
Time taken for tests: 81.476 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 10550000 bytes
HTML transferred: 9440000 bytes
Requests per second: 122.74 [#/sec] (mean)
Time per request: 203.690 [ms] (mean)
Time per request: 8.148 [ms] (mean, across all concurrent requests)
Transfer rate: 126.45 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.3 0 30
Processing: 11 203 60.9 196 417
Waiting: 1 202 60.0 194 417
Total: 27 203 61.0 196 417

Percentage of the requests served within a certain time (ms)
50% 196
66% 238
75% 244
80% 251
90% 280
95% 316
98% 345
99% 359
100% 417 (longest request)

## BJOERN FOUR WORKERS #####

FF 3.6.1 (AJAX)
unprimed: 6 requests, 77.8 KB (0 from cache), 1.8s (onload: 2.05s)
primed: 6 requests, 77.8 KB (76.8 KB from cache), 1.53s (onload: 1.78s)

## BJOERN SINGLE WORKER #####

FF 3.6.1 (AJAX)
unprimed: 6 requests, 77.8 KB (0 from cache), 1.82s (onload: 2.03s)
primed: 6 requests, 77.8 KB (76.8 KB from cache), 1.56s (onload: 1.78s)

AB (10K)
Document Path: /
Document Length: 944 bytes

Concurrency Level: 25
Time taken for tests: 51.039 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 10150000 bytes
HTML transferred: 9440000 bytes
Requests per second: 195.93 [#/sec] (mean)
Time per request: 127.597 [ms] (mean)
Time per request: 5.104 [ms] (mean, across all concurrent requests)
Transfer rate: 194.21 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.3 0 28
Processing: 4 127 38.1 106 278
Waiting: 1 127 37.6 106 275
Total: 29 127 38.3 106 278

Percentage of the requests served within a certain time (ms)
50% 106
66% 130
75% 163
80% 167
90% 179
95% 197
98% 232
99% 248
100% 278 (longest request)

- - -

"BJOERN IS SCREAMINGLY FAST AND ULTRA-LIGHTWEIGHT." He's right.

bjoern is up to 35 times faster than marrow and gevent w/ py-wsgi and
1.5 times faster than gevent w/ c-wsgi. C truly is a requirement for
any kind of real speed.

- - -

> I'm mentioning this because in my tests m.s.http handles C10K, is able to process
> 10K req/sec at C6K or so, is fully unit tested, and has complete documentation.

Which `ab` arguments are you using for "C10K"?

> Unfortunately for most people, it currently only supports PEP 444[2] (with
> modifications[3]) and, soon, my draft rewrite[4].
>
> Hell, creating a middleware-based adapter from WSGI 1 to my version of PEP
> 444 should be relatively straight-forward, considering the same
> incompatibility that bjoern takes exception to (the writer function returned
> by start_response, or rather, the lack of it).

I'm not yet intimate with the details of WSGI so I've learned a great
deal already just by hearing that a PEP 444 exists. To clarify, PEP
3333 is `WSGI 1.1` and PEP 444 is `web3`, correct? Would it be
accurate to say that your server supports `web3` rather than `WSGI 2`?

If anyone is interested in future benchmarks as I continue to
reintroduce complexity I'd be willing to repeat the above
periodically. And last but not least, the relevance of this post to
web.py is that my project will use web.py apps as "extensions" atop a
decentralized social framework. Think of it as retaining the
high-level features of web.py while constraining and optimizing the
lower-level features (in no small part by marrying the framework to
the most optimal server implementation).

--
Angelo Gladding
ang...@gladding.name

Branko Vukelić

Jan 1, 2011, 8:07:08 AM1/1/11
to we...@googlegroups.com
On Sat, Jan 1, 2011 at 11:05 AM, Angelo Gladding <ang...@gladding.name> wrote:
> "BJOERN IS SCREAMINGLY FAST AND ULTRA-LIGHTWEIGHT." He's right.

Thanks for this. And yes, please benchmark occasionally. I'm sure lots
of people are excited about bjoern and would like to know.

Alice Bevan–McGregor

Jan 1, 2011, 7:32:19 PM1/1/11
to we...@googlegroups.com
> To be fair he does say "Not HTTP/1.1 capable (yet)." Is it the
> pipelining that you'd miss?

That was added after a pull request I sent. ;)

HTTP/1.1 is far more than pipelining, though that does add
significantly to performance. Chunked encoding allows for far more
streamlined responses and true async responses. You don't have to
define content-length, so fully dynamic content is easier to produce.
(HTTP/1.0 does have keep-alive via the Connection: keep-alive header,
but it -required- perfectly aligned content-lengths.)
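The chunked wire format itself is tiny; a quick illustrative sketch (not bjoern's or marrow's code, just the framing rule from the HTTP/1.1 spec):

```python
# Each chunk is its size in hex, CRLF, the data, CRLF; a zero-length
# chunk terminates the body. This is what lets a server stream a
# response without knowing the total content-length up front.

def encode_chunked(parts):
    for part in parts:
        yield b'%x\r\n' % len(part) + part + b'\r\n'
    yield b'0\r\n\r\n'

wire = b''.join(encode_chunked([b'Hello, ', b'world!']))
# wire == b'7\r\nHello, \r\n6\r\nworld!\r\n0\r\n\r\n'
```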

> My application consists of a modified clone of web.py sans the bloat. I
> never thought I'd say such a thing of the anti-framework framework.
> Full disclosure: I've implemented my own `web.application`, borrowed
> heavily from `web.utils`/`web.http`, reimplemented most of `web.webapi`
> keeping with the style, and carried over the holy `web.template`.

I, too, forked web.py and removed significant portions of it:

https://github.com/GothAlice/webpy

After determining that cleaning up WebPy wasn't in the cards for me, I
wrote my own WSGI middleware-based microframework "from scratch"
(WebCore).

> Currently a middleware sanitizes the `environ` into a nested storage of
> request and response data parsing `accept*` headers, parsing
> `user-agent` against `browscap` to determine browser capabilities,
> geolocating according to `REMOTE_ADDR`, and checking if AJAX request
> using `X-Requested-With`. No cookie/session handling has been
> reintroduced yet.

The 'draft' branch of marrow.server.http performs Unicode
normalization and conversion to native strings (Py2K byte str, Py3K
unicode str), moving some of the more mundane work out of the
middleware layer.

> 2. If it is requested by a non-javascript-capable user-agent it
> precompiles the output from the four page handlers internally and
> flushes a complete page in one shot. jQuery is obviously /not/ fetched
> (in `ab` or `lynx`).

I'd recommend flushing the parts as they are compiled; Google does this
when returning search results: static header (filled in with current
search terms, easy to generate), search results, ads, then footer. It
makes the page appear to fill faster.

> All `web.template`s are *precompiled* at application boot. Each of four
> content handler are produced via templates. The generator yields 4
> templates. That is 7 total. The landing utilizes these according to the
> rules above. The entire product is wrapped in a base template just
> before response. 9 template calls in total.

You might like to check out simplithe[1], a pure-Python templating
system (that uses Python itself as the interpreter, overloading []
notation) and is easy to return as a generative WSGI body (including
flush semantics).

> "BJOERN IS SCREAMINGLY FAST AND ULTRA-LIGHTWEIGHT." He's right.
>
> bjoern is up to 35 times faster than marrow and gevent w/ py-wsgi and
> 1.5 times faster than gevent w/ c-wsgi. c truly is a requirement for
> any kind of real speed.

That is, in fact, very impressive. Thank you for including Marrow. :)
What exactly did you mean by 4 marrow workers? Multi-processing?
Multi-processing on a single-core machine may impede performance, not
improve it. I'll also be adding multi-threading (which will perform
poorly under anything but Python 3.2+, but hey, it makes it more
complete).

Additionally, how much effort was it for you to make your application
compatible with the semantics of WSGI 2? (No start_response callable
and no writer returned by same.)
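For the curious, the difference in application shape amounts to this (illustrative only; not marrow's actual API):

```python
# WSGI 1: the server hands the app environ plus a start_response
# callable, which in turn returns the write() callable that PEP 444
# removes.
def app_wsgi1(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

# PEP 444 / "WSGI 2" style: environ in, a (status, headers, body)
# triple out; no start_response, no writer.
def app_wsgi2(environ):
    return '200 OK', [('Content-Type', 'text/plain')], [b'ok']

status, headers, body = app_wsgi2({})
```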

>> I'm mentioning this because in my tests m.s.http handles C10K, is able
>> to process
>> 10K req/sec at C6K or so, is fully unit tested, and has complete documentation.
>
> Which `ab` arguments are you using for "C10K"?

Here are the terminal windows: :)

https://gist.github.com/707936

The box is a 512MB (RAM) slice instance on SliceHost and is actively
running a MUSH (very inefficient use of CPU; it's constantly inducing a
small amount of load even with no active connections) and three WebCore
web apps.

At smaller concurrency (~6K) Marrow is able to process > 10K requests/second.

> I'm not yet intimate with the details of WSGI so I've learned a great
> deal already just by hearing that a PEP 444 exists. To clarify, PEP
> 3333 is `WSGI 1.1` and PEP 444 is `web3`, correct? Would it be
> accurate to say that your server supports `web3` rather than `WSGI 2`?

After some discussion on the Web-SIG mailing list, PEP 444 is now
"officially" WSGI 2, and PEP 3333 is WSGI 1.1.

> If anyone is interested in future benchmarks as I continue to
> reintroduce complexity I'd be willing to repeat the above
> periodically. And last but not least, the relevance of this post to
> web.py is that my project will use web.py apps as "extensions" atop a
> decentralized social framework. Think of it as retaining the
> high-level features of web.py while constraining and optimizing the
> lower-level features (in no small part by marrying the framework to
> the most optimal server implementation).

Sounds thoroughly impressive thus far! I can safely assume this is a
closed-source project?

Happy new-year!

- Alice.

[1] http://bit.ly/fxcFzG


Graham Dumpleton

Jan 1, 2011, 7:56:59 PM1/1/11
to we...@googlegroups.com


On Sunday, January 2, 2011 11:32:19 AM UTC+11, GothAlice wrote:

> I'm not yet intimate with the details of WSGI so I've learned a great
> deal already just by hearing that a PEP 444 exists. To clarify, PEP
> 3333 is `WSGI 1.1` and PEP 444 is `web3`, correct? Would it be
> accurate to say that your server supports `web3` rather than `WSGI 2`?

> After some discussion on the Web-SIG mailing list, PEP 444 is now
> "officially" WSGI 2, and PEP 3333 is WSGI 1.1.

Stop repeating that, it is not true despite what you reckon.

For a start, PEP 3333 is still WSGI 1.0. Read the specification and it
says that wsgi.version is still 1.0.

And as I have said before, PEP 444 does not have any official blessing
as being WSGI 2.0. The only thing people were happy for you to do was
to take PEP 444 and develop the idea further. The silence of the people
on the Python WEB-SIG whose opinion matters does not constitute
approval for you to commandeer the WSGI 2.0 moniker.

Graham



Alice Bevan–McGregor

Jan 1, 2011, 10:18:20 PM1/1/11
to we...@googlegroups.com
Graham,

You are correct; 3333 is v1.0.1. A lack of sleep over the holidays is
slowing my brain. ;)

As to your only other contribution to this particular discussion, a
complaint, see your e-mail inbox.

- Alice.


Branko Vukelić

Jan 2, 2011, 5:13:05 AM1/2/11
to we...@googlegroups.com
On Sun, Jan 2, 2011 at 1:56 AM, Graham Dumpleton
<graham.d...@gmail.com> wrote:
> Python WEB-SIG whose opinion matters does not constitute approval for you to
> commandeer the WSGI 2.0 moniker.

Graham, look at the Web. Most of the "standards" on today's Web were
first implemented or specified by people who did NOT have the consent
of the people and institutions "whose opinion matters". WHATWG, most
notable example lately, started out just like web3, and their work is
now standard in HTML5.

In the end you can only standardize stuff that EXISTS, so "opinions"
don't really count. Work counts. Code counts.

Alice Bevan–McGregor

Jan 2, 2011, 6:10:29 AM1/2/11
to we...@googlegroups.com
>> Python WEB-SIG whose opinion matters does not constitute approval for you to
>> commandeer the WSGI 2.0 moniker.
>
> Graham, look at the Web. Most of the "standards" on today's Web were
> first implemented or specified by people who did NOT have the consent
> of the people and institutions "whose opinion matters". WHATWG, most
> notable example lately, started out just like web3, and their work is
> now standard in HTML5.
>
> In the end you can only standardize stuff that EXISTS, so "opinions"
> don't really count. Work counts. Code counts.

See also the thread where the choice of names for PEP 444 (web3 vs.
wsgi2) are discussed and WSGI 2 chosen by community discussion:

http://bit.ly/epyP6j

This might not be a large enough segment of the population to make
everyone happy, but it's enough for me to be able to move past it and
concentrate on actual technical issues.

- Alice.


Branko Vukelić

Jan 2, 2011, 8:06:42 AM1/2/11
to we...@googlegroups.com
On Sun, Jan 2, 2011 at 12:10 PM, Alice Bevan–McGregor
<al...@gothcandy.com> wrote:
> This might not be a large enough segment of the population to make everyone
> happy, but it's enough for me to be able to move past it and concentrate on
> actual technical issues.

While some are waiting for something to have an opinion on, some are
doing their thing. :)

Greg Milby

Jan 2, 2011, 8:40:34 AM1/2/11
to we...@googlegroups.com

Yea....*snap!*

::Sent from my Verizon HTC incredible gOS::
