[ANN] uasyncio - lean asyncio-like library


Paul Sokolovsky

Jun 20, 2015, 7:31:12 PM
to python...@googlegroups.com
Hello,

It was mentioned several times on the list already, and I would
like to finally make a formal announcement of it, also to mark its
"mostly complete" status.

What:
uasyncio is an asyncio-like library for MicroPython
(https://github.com/micropython/micropython) to accommodate writing
asyncio-style applications for constrained (both memory- and CPU-wise)
systems, down to microcontrollers, but also for small embedded Linux
systems (as well as for embedding into non-Python applications, or
producing small self-contained applications).

Where:
https://github.com/micropython/micropython-lib/tree/master/uasyncio.core
https://github.com/micropython/micropython-lib/tree/master/uasyncio

Structure:
uasyncio is structured as two components: the uasyncio.core module,
which implements a generic priority-queue-based scheduler, and the
uasyncio package proper, which adds async I/O support (currently for
Linux).

Functionality provided:
uasyncio implements a subset of asyncio functionality:
1. It is built around the concept of coroutines. Futures and Tasks are
not part of its core API.
2. For I/O, the high-level Stream API is supported, without the
low-level Transport API.
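
For illustration, here is the general shape of that high-level Stream
API, sketched against CPython's asyncio in modern async/await syntax
(uasyncio of this era used generator-based coroutines with `yield
from`; this is not uasyncio's own code):

```python
import asyncio

async def handle(reader, writer):
    # Stream-level echo handler: read one line, write it back.
    line = await reader.readline()
    writer.write(line)
    await writer.drain()
    writer.close()

async def main():
    # Bind to an ephemeral port and exercise the handler once.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello\n")
    echoed = await reader.readline()
    writer.close()
    server.close()
    await server.wait_closed()
    return echoed

echoed = asyncio.run(main())
```

The whole application is expressed in terms of readers and writers;
no Transport/Protocol objects appear at this level.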

Differences from asyncio:
1. The main difference is that uasyncio is a strictly asynchronous
library, with writing (and other related operations) being as
asynchronous as reading. More info is in http://bugs.python.org/issue24449 ,
which also links to a recent discussion on the list.

2. One potential difference is the handling of Stream close operations.
This aspect isn't yet fully worked out (and is the reason why I didn't
post the uasyncio announcement earlier). Intuitively, the issue is that
asyncio separates read and write Streams into separate objects, but the
underlying socket object is a duplex read/write one, so closing it
should be done carefully. As uasyncio tries to avoid extra abstraction
layers, its handling of close operations differs from asyncio's. Any
hints/discussion of this issue are welcome.
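
The duplex issue can be shown with a plain-socket sketch (illustrative
only, not uasyncio code): shutting down only the write half of a
socket leaves the read half usable, which is exactly the kind of care
a close implementation over separate read/write streams must take.

```python
import socket

# A duplex socket pair stands in for a client connection whose
# read and write "streams" are separate objects over one descriptor.
a, b = socket.socketpair()

# Close only our write half: the peer sees EOF, but our read half
# keeps working, so data in flight isn't lost.
a.shutdown(socket.SHUT_WR)

b.sendall(b"still readable")
data = a.recv(64)   # read half still works after the half-close
eof = b.recv(64)    # peer observes our write half as closed (b"")

a.close()
b.close()
```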

3. As an extension, it's possible to schedule a new coroutine for
execution by just yielding it. This was also discussed previously at
http://comments.gmane.org/gmane.comp.python.tulip/2430 .
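
A toy round-robin scheduler illustrates the idea (a hypothetical
sketch of the "yield a coroutine to schedule it" extension, not
uasyncio's actual implementation):

```python
import types
from collections import deque

log = []

def run(main):
    # Toy round-robin scheduler for generator-based coroutines.
    queue = deque([main])
    while queue:
        coro = queue.popleft()
        try:
            yielded = coro.send(None)
        except StopIteration:
            continue  # coroutine finished
        if isinstance(yielded, types.GeneratorType):
            queue.append(yielded)  # yielded coroutine: schedule it
        queue.append(coro)         # requeue the yielder

def child(n):
    log.append(("child", n))
    yield  # plain yield: just give up control

def main():
    for n in range(2):
        yield child(n)  # yielding a coroutine schedules it
    log.append(("main", "done"))

run(main())
```

After `run(main())`, `log` records both children executing alongside
the parent coroutine, with no explicit Task creation.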

To let uasyncio applications run with asyncio, there's a compatibility
module available at
https://github.com/micropython/micropython-lib/tree/master/cpython-uasyncio


Ecosystem:

An experimental web stack was prototyped on top of uasyncio:

1. picoweb web pico-framework: https://github.com/pfalcon/picoweb
2. utemplate tiny template module: https://github.com/pfalcon/utemplate
3. uorm tiny anti-ORM (the current implementation supports Sqlite and
is actually synchronous so far): https://github.com/pfalcon/uorm

Also, there's an async HTTP client,
https://github.com/pfalcon/micropython-uaiohttpclient (roughly
following the aiohttp API).

There's an example application for picoweb ported from Flask:
https://github.com/pfalcon/notes-pico


Achieved memory efficiency:
I once read an article describing the coolness of Python coroutines,
in particular mentioning that a coroutine object takes a "mere" 1KB of
memory, so there can easily be tens of thousands of them, unlike
(preemptive) threads. In MicroPython, a small coroutine takes 32 bytes
of memory. But a minimal web application using picoweb still requires
50KB of heap to run. That's good enough for Linux systems, but somewhat
on the bigger side for microcontrollers (for comparison, the reference
microcontroller system for uPy has 128KB of heap; we would like to
support systems down to 16KB of memory).
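
As a rough point of comparison, CPython's per-coroutine overhead can
be checked directly (an illustrative measurement; exact numbers vary
by version, exclude the attached frame, and are unrelated to the
MicroPython figure above):

```python
import sys

def coro():
    yield  # minimal generator-based coroutine

g = coro()
size = sys.getsizeof(g)  # generator object header only; frame is extra
```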

Achieved performance:
Testcases are in
https://github.com/micropython/micropython-lib/tree/master/uasyncio/benchmark

uasyncio + MicroPython:
Document Length:        12000 bytes
Concurrency Level:      100
Complete requests:      10000
Requests per second:    10699.74 [#/sec] (mean)
Time per request:       9.346 [ms] (mean)

asyncio + cpython-uasyncio + CPython 3.4.2:
Document Length:        12000 bytes
Concurrency Level:      100
Complete requests:      10000
Requests per second:    4876.02 [#/sec] (mean)
Time per request:       20.509 [ms] (mean)

Apache 2.4 + default static page:
Document Length:        11510 bytes
Concurrency Level:      100
Complete requests:      10000
Requests per second:    12857.98 [#/sec] (mean)
Time per request:       7.777 [ms] (mean)


--
Best regards,
 Paul                     

Ludovic Gasc

Jun 21, 2015, 2:21:13 AM
to Paul Sokolovsky, python-tulip

Hi Paul,

It's pretty interesting.


I have a question: do you test your daemons with wrk instead of ab?
Could you test with these values: wrk -t8 -c256 -d1m ?

I'm interested in integrating uasyncio and picoweb into FrameworkBenchmarks:
https://www.techempower.com/benchmarks/

Regards.

Ludovic Gasc (GMLudo)
http://www.gmludo.eu/

Paul Sokolovsky

Jun 21, 2015, 5:09:38 AM
to Ludovic Gasc, python-tulip
Hello Ludovic,

On Sun, 21 Jun 2015 08:20:52 +0200
Ludovic Gasc <gml...@gmail.com> wrote:

> Hi Paul,
>
> It's pretty interesting.

Thanks.

>
> I've a question: do you test your daemons with wrk instead of ab?
> Could you test with the values: wrk -t8 -c256 -d1m ?

I did not, for the same old reason: lack of time. The uasyncio project
started more than a year ago, and for all this time I didn't formally
announce it because I wasn't sure I had done enough homework on it. So
I decided to just "throw it over the wall" to invite wider community
review/criticism.

And to answer your question, I selected Apache Bench because it's a
well-known, easily accessible benchmarking tool. I used another tool,
Boom (https://github.com/tarekziade/boom), but not for performance
testing, rather for correctness testing (ensuring that the data
received is actually what's expected):

https://github.com/pfalcon/micropython-lib/blob/master/uasyncio/benchmark/test-boom-heavy.sh

I'll add testing with wrk to my queue, but it may take a while (because
there are a bunch of other things I'm working on for MicroPython besides
uasyncio).

> I'm interested in to integrate uasyncio and picoweb in
> FrameworkBenchmarks: https://www.techempower.com/benchmarks/

I read about them, but have to admit I didn't have a chance to review
them in detail ;-(. But I re-did and included my tests comparing
uasyncio & asyncio precisely to invite 3rd-party independent testing,
so I'd appreciate it if you could add it to your queue as well.
MicroPython+uasyncio should be relatively easy to install; one area
where we may be lacking is documentation. But then, to improve, we'd
need independent feedback on that too. So, if/when you get to it, feel
free to ask me any questions/give any feedback (by whatever way you
like, e.g. via https://github.com/micropython/micropython/issues/new).

Thanks,
Paul

>
> Regards.
>
> Ludovic Gasc (GMLudo)
> http://www.gmludo.eu/

--
Best regards,
Paul mailto:pmi...@gmail.com

Ludovic Gasc

Jun 21, 2015, 7:21:17 AM
to Paul Sokolovsky, python-tulip
Hi Paul,

I've added MicroPython+uasyncio+picoweb to the FrameworkBenchmarks todo-list.
I'll try to do that for round 12; it's too late for round 11.

However, to have a chance at good results, it will be necessary to support the multi-worker pattern.
That means creating a worker like this one for uasyncio: https://github.com/Eyepea/API-Hour/blob/master/api_hour/worker.py
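
In plain Python, the multi-worker (pre-fork) pattern looks roughly
like this (a deliberately simplified, hypothetical sketch, not
API-Hour's actual worker): several processes share one listening
socket, and each runs its own accept loop (here, a single accept).

```python
import os
import socket

def serve(listener, worker_id):
    # Worker body: accept one connection and identify ourselves.
    conn, _ = listener.accept()
    conn.sendall(b"worker %d" % worker_id)
    conn.close()

# One listening socket, created before forking, on an ephemeral port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(16)
port = listener.getsockname()[1]

pids = []
for worker_id in range(2):
    pid = os.fork()
    if pid == 0:
        serve(listener, worker_id)  # child inherits the socket
        os._exit(0)
    pids.append(pid)

# Parent acts as a client: each connection is served by some worker.
replies = []
for _ in range(2):
    c = socket.create_connection(("127.0.0.1", port))
    replies.append(c.recv(64))
    c.close()
for pid in pids:
    os.waitpid(pid, 0)
listener.close()
```

The kernel distributes incoming connections among the workers blocked
in accept(), which is the same basic mechanism a multi-worker uasyncio
setup would rely on.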

BTW, there's an interesting article about the Nginx architecture; it's more or less the same basic principles as in API-Hour
(I've taken the good ideas from the Nginx and HAProxy internal architectures).

I'll try to fork my actual worker to support uasyncio; however, I don't know if API-Hour and Gunicorn will work with MicroPython, so I'll see.
If you are present at EuroPython this year in Bilbao, it will be a good occasion to discuss this together.

BTW, if somebody else wants to help, be my guest.

Have a nice week-end.

--

Ludovic Gasc (GMLudo)