--
For other discussions, see https://groups.google.com/a/dartlang.org/
For HOWTO questions, visit http://stackoverflow.com/tags/dart
To file a bug report or feature request, go to http://www.dartbug.com/new
To unsubscribe from this group and stop receiving emails from it, send an email to misc+uns...@dartlang.org.
Concurrency and multi-threaded programming have a reputation for difficulty. We believe this is due partly to complex designs such as pthreads and partly to overemphasis on low-level details such as mutexes, condition variables, and memory barriers. Higher-level interfaces enable much simpler code, even if there are still mutexes and such under the covers.
One of the most successful models for providing high-level linguistic support for concurrency comes from Hoare's Communicating Sequential Processes, or CSP. Occam and Erlang are two well known languages that stem from CSP. Go's concurrency primitives derive from a different part of the family tree whose main contribution is the powerful notion of channels as first class objects. Experience with several earlier languages has shown that the CSP model fits well into a procedural language framework.
Goroutines are part of making concurrency easy to use. The idea, which has been around for a while, is to multiplex independently executing functions—coroutines—onto a set of threads. When a coroutine blocks, such as by calling a blocking system call, the run-time automatically moves other coroutines on the same operating system thread to a different, runnable thread so they won't be blocked. The programmer sees none of this, which is the point. The result, which we call goroutines, can be very cheap: they have little overhead beyond the memory for the stack, which is just a few kilobytes.
To make the stacks small, Go's run-time uses resizable, bounded stacks. A newly minted goroutine is given a few kilobytes, which is almost always enough. When it isn't, the run-time grows (and shrinks) the memory for storing the stack automatically, allowing many goroutines to live in a modest amount of memory. The CPU overhead averages about three cheap instructions per function call. It is practical to create hundreds of thousands of goroutines in the same address space. If goroutines were just threads, system resources would run out at a much smaller number.
This is similar to the model used in Erlang and Go.
The plan is to experiment with alternative concurrency mechanisms, with the goal of gaining insights that may (or may not) feed into Dart proper at some point. We feel that without these experiments it is hard to conclude that the current asynchronous model is the best we can do. We are also interested in getting a better understanding of how to deploy and run applications (partly?) written in the Dart language on various mobile platforms. Some of those platforms do not allow just-in-time compilation, so in the context of Fletch we're experimenting with fast interpretation instead.
Great, thanks for the info...
--
Correct link:
https://github.com/dart-lang/fletch/blob/master/docs/scheduler.md
Fletch bytecode with dart2js, what is this about?
--
> Fletch bytecode with dart2js, what is this about?
It's important to note that we're not blocking native threads for this (it's still based on epoll / kqueue underneath) so we believe we can avoid sacrificing scalability even though the programming model is nice and simple.
package main

import (
    "fmt"
)

func main() {
    channel := make(chan int)
    // Producer: send 0..9, then close the channel to signal completion.
    go func() {
        for i := 0; i < 10; i++ {
            channel <- i
        }
        close(channel)
    }()
    // Consumer: receive until the channel is closed (ok becomes false).
    for {
        if value, ok := <-channel; ok {
            fmt.Println(value)
        } else {
            break
        }
    }
}
library main;

import "dart:async";

Future main() async {
  var channel = new StreamController();
  // Consumer: print each value as it arrives on the stream.
  channel.stream.listen((int i) {
    print(i);
  });
  // Producer: add 0..9 asynchronously, then close the controller.
  new Future(() {
    for (int i = 0; i < 10; i++) {
      channel.add(i);
    }
    channel.close();
  });
  await channel.done;
}
K.
--