Thank you very much for your reply.
I understand that most people prefer the threaded way of programming concurrency, and it is absolutely OK for a programming language to provide a threaded programming style even in an event-loop context.
But let me say a little more about my personal opinion on concurrent programming.
In process algebra, we can define two ways to compose two processes into one (just as addition or multiplication compose numbers in math): one is sequential composition, the other is concurrent composition.
In programming languages using the traditional blocking-IO style, the semicolon (';') concatenating statements is actually a sequential composition. And if a concurrent composition is required, a new language primitive must be invented, such as threads, fibers, coroutines, or goroutines in Go.
However, as Robin Milner pointed out in his famous Turing Award lecture, if we had to choose only one mathematical operation for process composition, either the sequential or the concurrent one, which should it be? The answer is the concurrent one. Milner observed that a sequential composition can be represented by the concurrent composition of two processes, where one starts right after the other finishes.
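In CCS-style notation, the encoding reads roughly like this (my own rough sketch, where $P_d$ stands for $P$ modified to emit a private completion signal $d$ when it terminates):

$P ; Q \;\cong\; (P_d \mid d.Q) \setminus \{d\}$

Here $d.Q$ blocks on $d$ and then behaves as $Q$, so "run P, then Q" is expressed with nothing but parallel composition and synchronization.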
This is exactly what the Node callback does. An asynchronous function and its callback function passed as an argument are actually a sequential composition of two processes (in the form of functions).
The semicolon between two "asynchronous" functions invoked synchronously one after the other is actually a symbol for concurrent composition.
This is the beauty of the Node callback. There is no need to invent another language primitive to deal with concurrency. Or we may say, the event model is inherently immune to concurrency problems.
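To make the point concrete, here is a minimal Node-style sketch (the file names are just placeholders):

```js
const fs = require('fs');

// Sequential composition: the callback is the "next" process,
// started only after the read has finished.
fs.readFile('/etc/hostname', (err, data) => {
  if (err) return console.error(err);
  console.log('read finished:', data.toString());
});

// Concurrent composition: two asynchronous calls separated by a
// semicolon kick off their IO side by side; the event loop
// interleaves their completions.
fs.readFile('a.txt', () => console.log('a done'));
fs.readFile('b.txt', () => console.log('b done'));
```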
Of course the callback has its weakness in coding style. Actually, a callback itself is a **degenerate** event emitter, one that just emits a 'finish' event once. Writing an anonymous callback is much simpler than implementing a full-fledged class that inherits from EventEmitter, and it takes far fewer resources to run. But the event emitter is not just powerful; it is almighty for all concurrency problems in an event-loop based execution context.
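For example, a callback-style operation can be lifted into such a minimal emitter with a few lines (the wrapper name here is made up):

```js
const EventEmitter = require('events');
const fs = require('fs');

// A callback is essentially an emitter that fires 'finish' (or 'error') exactly once.
function readFileJob(path) {
  const job = new EventEmitter();
  fs.readFile(path, (err, data) => {
    if (err) job.emit('error', err);
    else job.emit('finish', data);
  });
  return job;
}

readFileJob('/etc/hostname')
  .on('finish', data => console.log(data.toString()))
  .on('error', err => console.error(err));
```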
Suppose a group of processes starts simultaneously, and if one of them fails, all the others should be cancelled. This is not an uncommon case in server-side programming. In this case, if all processes are implemented as full-fledged event emitters, each with an abort method, the clean-up in error handling is a charm, and things like this can be composed into bigger and bigger processes repeatedly. I personally find that using emitters with rich methods, such as abort, pause, resume, progress, etc., is the simplest, most understandable, and most controllable way to program concurrency. All actions are just jobs. Small jobs can be composed into larger ones. Jobs can be pending, started, paused and resumed, or aborted. In this way, fine-grained behaviors, such as scheduling, laziness, queueing, or error handling, can be achieved in an extremely easy way.
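A rough sketch of that "abort the rest on first failure" composition, assuming every job is an emitter that fires 'finish' or 'error' and exposes an abort() method (the helper name is hypothetical):

```js
const EventEmitter = require('events');

// Compose several jobs into one larger job; if any child fails,
// abort the remaining children and fail the whole composition.
function all(jobs) {
  const parent = new EventEmitter();
  let remaining = jobs.length;
  let failed = false;

  jobs.forEach(job => {
    job.on('finish', () => {
      if (!failed && --remaining === 0) parent.emit('finish');
    });
    job.on('error', err => {
      if (failed) return;
      failed = true;
      jobs.filter(j => j !== job).forEach(j => j.abort());
      parent.emit('error', err);
    });
  });

  // The composition is itself a job, so it can be aborted as a whole too.
  parent.abort = () => jobs.forEach(j => j.abort());
  return parent;
}
```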
Since event emitters (jobs) are just objects, they can be composed and manipulated at any level of granularity. There is no difference between a small job and a large job. But in threaded-style programming, functions are functions, threads are threads, and coroutines are coroutines. They can't be converted into each other freely.
----
Concurrent programming can be described by just two orthogonal abstract concepts: processes, and inter-process communication.
In the thread model, programming a single process is easier and looks synchronous. But inter-process communication is a nightmare in complex scenarios and looks asynchronous (and possibly is inherently asynchronous, if true threads are used).
In the event model, all processes are executed asynchronously (for Node, the ideal is that all intensive computation should also be executed in the libuv-based thread pool), and the main process, the event loop itself, is actually the synchronous communication between all processes.
This is the duality. Simply put, each model picks one side to be synchronous and easy, and leaves the other asynchronous, to be either hard to deal with (thread model) or inefficient (event model: doing IO asynchronously is OK, but doing all computation asynchronously is very inefficient).
----
So, I do hope Dart can provide Node-style asynchronous IO someday. After all, Dart is a language, which should provide the mechanism, not merely the policy. Futures, Promises and async/await are a solution to the coding-style problem of sequential process composition. But they are definitely not an end-all, for-all solution to concurrent process composition. You cannot cancel, pause or resume a Future/Promise. A thread or thread-like thing cannot be interrupted by nature. The only thing you can do is poll some external state variable all the way through, which is really disgusting and error-prone.
The event emitter is the end-all, for-all solution, at least at the model level. And the callback is just a degenerate, simpler case. That is why I almost never write async/await in top-level composition. I do use them at the most fine-grained level of behavior, and then they are converted into callbacks or encapsulated into event emitters for higher-level composition.
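As an illustration of that workflow, a fine-grained async function can be encapsulated like this (a sketch only; the job interface with 'finish'/'error' and abort() is my own assumption):

```js
const EventEmitter = require('events');
const fs = require('fs');

// Use async/await for the fine-grained steps, then expose the whole
// operation as a job (an emitter) for higher-level composition.
function loadConfigJob(path) {
  const job = new EventEmitter();
  let aborted = false;
  job.abort = () => { aborted = true; };

  (async () => {
    try {
      const text = await fs.promises.readFile(path, 'utf8');
      const config = JSON.parse(text);
      if (!aborted) job.emit('finish', config);
    } catch (err) {
      if (!aborted) job.emit('error', err);
    }
  })();

  return job;
}
```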
----
And since I am so happy with Node, why did I come here?
For JavaScript, you can happily write and run it if memory usage is not a concern. But if memory is a concern, for example if the ARM/Linux board has merely 16M~64M of RAM, Node is not practical any more for a medium-sized application.
I love JavaScript, and I have read some tutorials online about Dart. I am glad to know that almost all the good things in JavaScript, such as first-class functions and closures, are preserved in Dart. Furthermore, Dart programs can be compiled and have a type system, which usually means memory can be used more efficiently.
IMHO, in the competition between programming languages, a language becomes popular not because it is designed better than the others, but because at a certain moment there is a new requirement to program something and all the existing languages cannot do it very well. Dart is simple and lightweight, and can run efficiently with limited memory. So I do think that in the coming IoT era there will be a huge success for Dart, in areas where Python and JavaScript cannot perform efficiently with limited system resources.
But please, besides the Future-based dart:io, provide node-style callback-based async IO to programmers and let us choose which one to use.
If you have never heard server-side programmers complain about this, I suppose it is because in most cases of server programming they rely on the database for data persistence and on dynamic, horizontal expansion of virtual hosts to deal with the shortage of computing resources. In IoT, however, this is not the case. Most devices have very limited computing power and IO capabilities. When incoming tasks flood in, the best thing we can do is to provide **partial** usability of the service by scheduling, queueing, and rejecting or aborting unimportant jobs. This is the lesson we learned from years of programming an extremely low-end home NAS device: no database, no way to expand the computing resources. This is where the Node model shines. Everything can be scheduled in a simple and flexible way. I do think this is crucial for most IoT devices.
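For what it is worth, the kind of scheduling I mean can be sketched in a few lines (a toy example under the same job-emitter assumption as above, not our actual NAS code):

```js
// A toy scheduler: at most `limit` jobs run concurrently and at most
// `maxQueue` wait in line; anything beyond that is rejected outright,
// so the device stays partially usable instead of drowning.
function makeScheduler(limit, maxQueue) {
  let running = 0;
  const queue = [];

  function next() {
    while (running < limit && queue.length > 0) {
      const start = queue.shift();
      running++;
      const job = start();              // start() returns a job emitter
      const finish = () => { running--; next(); };
      job.on('finish', finish);
      job.on('error', finish);
    }
  }

  return function submit(start) {
    if (running >= limit && queue.length >= maxQueue) return false; // reject: overloaded
    queue.push(start);
    next();
    return true;                        // accepted (running now or queued)
  };
}
```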