Go is a much simpler language than D, so mastering Go will take you
much less time. You should be able to understand its features after
playing with it for a few days, and master them within a few weeks. I
couldn't say the same for D.
> However, I am confused about which one is better. I would like to
> develop a server using one of these languages.
Go was developed primarily for writing server software, so I think
you'll find it a useful tool. Give it a try!
> I understand Go is still under development.
> I am looking for a long-term solution.
Go is still under development, but the language is stable. We're using
it at Google for production work, so we're committed to furthering its
development. :-)
Andrew
When it's used in non-confidential projects, yes.
Andrew
> if you like politics in D (d1.0, d2.0, phobos, tango), i've been using
> D for almost 5 yrs now porting my project from D+tango to Go and it's a
> breeze.
Why are you porting your projects from D+tango to Go? What problems
did you face while using D? Sorry, I don't know the history/politics of D.
> D1 is matured enough and production ready, very stable, i had a
> program (crawler with 50 threads) running for 2 days with no problem.
great!
> love the GC.
> D's spec is crazy multi-paradigm you name it they have it. No 64bit
> support for DMD (D's official compiler), LDC (llvm front end for D1)
> has 64bit support but still young.
Well, though Go is stable, it is still under development. I don't
have any idea about the state of 64-bit support in either
language.
> For me it's time to move forward, Go is the future. Go has everything
> i need, it's simple and it works.
Is D not the future? Which features are missing in D? Is there any
big vendor (like Google) supporting D?
Sorry, I should have asked some of the above questions on the D mailing
list or googled them.
I've never used D, but I have looked at it a few times and if anything
it has too many features.
I've never used D, but from what I understand their user base is
fragmented by two different, incompatible "standard" libraries. Then
there's D 2.0, which is still in development but is supposed to
address the incompatibility issue.
> Well, though Go is stable, it is still under development. I don't
> have any idea about the state of 64-bit support in either
> language.
Go supports 64-bit development. You might even say that it's slightly
"more supported," since I believe many of the developers use 64-bit
machines. That's not to say that 32-bit support is lacking; it's not.
> Sorry, I should have asked some of the above questions on the D mailing
> list or googled them.
If you want more than just the biased opinion of this list, that's a
good idea. ;)
- Evan
IMOO. YMMV. TINRAR. LSMFT.
Jeff
--
Jeff Dickey Seven Sigma Software and Services
Email: jdi...@seven-sigma.com
Phone/SMS: +65 8333 4403
Website: http://www.seven-sigma.com
More info at: http://card.ly/jdickey
Michael T. Jones
Chief Technology Advocate, Google Inc.
1600 Amphitheatre Parkway, Mountain View, California 94043
Email: m...@google.com Mobile: 650-335-5765 Fax: 650-649-1938
Organizing the world's information to make it universally accessible and useful
I respectfully disagree. I have not found the use of the unsafe
package to be of any practical limitation when writing systems
programs. I've also used manual memory management, basically mmapping
in memory pages and handing out the memory with a custom allocator.
Having used both languages for a few years I think this sums things up
nicely: Go is a better C, D is a better C++ (and Java is an even
better C++)
Kai
I have liked D for a long while, but until there is a stable 64-bit
v2.0, with AMD64 installers, alongside the 386 v2.0, it is not really a
viable language.
I really like Go because of the process/channels model (I have been a
fan of CSP, and actor model, for over 25 years) and the presence of some
(not enough?) reflection capabilities.
The actor model is getting a lot of promotion via Erlang and Scala,
software transactional memory is getting a lot of promotion via Haskell
and Clojure. occam lives on as occam-pi in KRoC and JCSP (on which we
have constructed GroovyCSP), CSP is even getting airtime in Python (via
PythonCSP and PyCSP). The overall goal here, which to a great extent
strikes me as the goal of Go's goroutines and channels, is to
commoditize the processor and turn it into a resource that is managed by the
runtime system just as memory is. Applications should not have to worry
about multicore directly, though they do have to worry about
communications distance between processors so as to avoid inappropriate
assumptions about communications time and safety.
The problem for me with Go and D is that both languages give all the
appearance of being backward looking -- though this may just be
conditioned by worrying about Posix compliance.
For me there are two questions:
1. What is the language for writing the next big operating system?
2. Do PGAS languages have the edge for writing applications in the
future?
Linux and Mach, like Windows, are now really in "maintenance mode": their
architectures and fundamental capabilities are fixed and unchangeable.
Future hardware architectures show all the signs of heading directly
towards multiple, heterogeneous, multicore, NUMA architectures with
bus-level clustering, local clustering and wide-area clustering (if not
more communications levels) and operating systems and programming
languages are not really ready to handle this. Languages like Chapel,
X10, even Fortress are doing lots of interesting research in PGAS but
because they market themselves in the HPC arena, they don't get taken as
seriously as they should by a wider audience of programmers. Certainly
though they are neither ready, nor possibly ever appropriate, for the
leap of being languages with which to write operating systems.
So the question really is whether D and Go are just interesting
sidelines in the interregnum between the era of network connected
uniprocessors and that of massively parallel, multi-level architecture
systems.
Go and its goroutines handle bus-level multicores quite nicely, but then
the next level is the network; there is no concept of layered clustering.
C++0x gives us futures and asynchronous function calls to give similar,
albeit different, functionality -- and restrictions. I haven't
investigated D as much as I would like because of the 64-bit problem and
because the Threads library up to version 2.047 had errors that meant
threads code would not compile on Ubuntu Lucid or Debian Testing (I
guess I should download and try 2.048) and, to be honest, shared memory
multi-threading is not my idea of doing parallelism as an applications
programmer.
I suspect there will be a lot of prejudice against using a language with
garbage collection for writing a new operating system, which casts doubt
on whether Go will get used for that -- despite all the splendid
Plan-9 related work. So can D really step up and be a candidate? Or
will people just descend to the arguments "C is the only language
because it is the only one with a low enough viewpoint"?
--
Russel.
=============================================================================
Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel...@ekiga.net
41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@russel.org.uk
London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Package unsafe does go a long way there, serving for all the evil pointer
tricks. And of course one can trivially call C and assembly code
compiled by gc/ga without using any external tools.
- Taru Karttunen
> At the same time we believe that low-
> level races are the worst kind of weakness in a type system.
That's an interesting perspective. I agree that low-level races are a
serious problem in modern programming. But phrasing in terms of a
problem in the type system seems odd to me. It seems to me that the
purpose of a type system is to ensure that the only appropriate
operations are applied to a particular piece of data. But a low-level
race is not a result of an inappropriate operation, it's a result of two
appropriate operations being applied without necessary safeguards.
Using the type system to avoid races is like using the type system to
prevent two different programs from writing to the same file at the same
time; it seems to me to be working at the wrong level.
> To define sharing without races, D has a coherent model for
> immutability (via the immutable qualifier) and a model for lock-free
> sharing (via the sharing qualifier). The system is simpler than that
> of race-free academic languages (i.e. Cormac Flanagan's or Boyapati/
> Rinard's work on Java extensions) and also less powerful, but we
> believe that message passing + immutability + limited sharing form a
> very compelling proposition.
I wasn't able to find the docs on the sharing qualifier (the docs
suggest that it is called shared, but I couldn't find any docs on that
either). Can you point me to a description of how it works?
> Go is simpler in that it doesn't formalize sharing, but also wants to
> be expressive, which exposes it to data races. If you create a channel
> of *int, my understanding (and please correct me if I'm wrong) only
> convention can help you from having two threads access the same
> integer. It only gets worse with pointers to more elaborate data
> types, and history has shown that convention is a poor mechanism for
> ensuring thread safety of any kind. For my money that's simply not an
> option.
You're quite right: Go does permit data races, and does currently rely
only on convention to avoid them. Go's advantage over C++, and it is a
significant advantage, is that the rules for valid sharing are fairly
simple, and the language makes it much easier to do valid sharing.
As far as I know it is an open question whether it is possible to have a
language that is simple, expressive and efficient, and is also able to
prevent or detect data races. One approach is the type system, but as
noted I think that is the wrong level and not simple. Another approach
is transactional memory, whether in hardware or software, but that is
not efficient and also checks operations at the wrong level--the byte
rather than the struct.
In Go it's natural to think of sending a pointer on a channel as a
transfer of ownership. That suggests a model in which the compiler
warns about cases where sending a pointer on a channel is followed by a
write through the pointer. Go's package system makes it possible to do
this reliably inter-procedurally, but of course it's possible to write
complex looping code, or code that changes behaviour based on user
input, that defies analysis.
It also suggests a "safe" compilation mode in which pointers are
annotated with ownership information. Sending a pointer on a channel
changes ownership. Writing through a pointer checks ownership. This is
problematic in that it only detects races which actually occur, not
races which could theoretically occur.
I really don't know how feasible these ideas are.
Ian
>> I wasn't able to find the docs on the sharing qualifier (the docs
>> suggest that it is called shared, but I couldn't find any docs on that
>> either). Can you point me to a description of how it works?
>
> As I mentioned, the entire chapter on concurrency in TDPL is available
> for free online:
>
> http://www.informit.com/articles/printerfriendly.aspx?p=1609144
Thanks. As you know, this approach is clearly different from the one in
Go, which is a transfer of ownership model, where the transfer of
ownership is enforced only by convention. And, of course, Go doesn't
use the type system, which follows Go's general guideline of keeping the
type system light weight.
>> You're quite right: Go does permit data races, and does currently rely
>> only on convention to avoid them. Go's advantage over C++, and it is a
>> significant advantage, is that the rules for valid sharing are fairly
>> simple, and the language makes it much easier to do valid sharing.
>
> How does Go simplify the rules for valid sharing? Far as I can tell it
> can only simplify if it renders undue aliasing undefined. Does Go have
> something equivalent to Java's volatile and C++0x's atomic?
The rules for valid sharing are encapsulated in the slogan "don't
communicate by sharing memory; instead, share memory by communicating."
That is, always use channels to communicate between goroutines. Always
ensure that a single goroutine owns shared data, and use an explicit
channel send to transfer ownership to a different goroutine.
This approach can be used in other languages also, of course; the
advantage I see in Go is that the language makes it simple and easy.
Ian
> I understand. So what we have now is:
>
> (1) Pass-by-bitblt through channels for value types (i.e. no
> indirections, which means dynamic arrays are unduly shared). There is
> no checking that a value type being passed actually does not have
> indirections.
>
> (2) Pass of ownership by unchecked convention for data with
> indirections.
>
> (3) Everything else is undefined.
>
> If that's true, parts 1 and 2 are of limited expressiveness but it's
> part 3 that's really problematic, and I'm not sure a putative
> programmer understands the implications. Essentially that means even
> lock-based programming relies on implementation-level vagaries because
> without a memory model the compiler and the processor are relatively
> free to hoist data around rather freely. We're back to the reorderings
> hell often showcased as motivators of Java's and C++0x's memory
> models.
Go does have a memory model: http://golang.org/doc/go_mem.html . The
memory model does define mutexes. They are stylistically discouraged
for use in most Go code, but they are available and well-defined when
required.
Ian
So counting the years elapsed seems a poor predictor of language adoption.
--
Scott Lawrence
Tried Go - liked it because of its simple design and fast compile speed, and its ability to evolve into a first-class alternative to C, which is something that is needed. C evolved in the late 70s and early 80s in the minicomputer and then microprocessor environment. Something that addresses the advancements in CPU threading and messaging technologies, but keeps a simple, highly portable core with updated abilities to use these parallel technologies, is really needed. I think Go is a valid attempt at that, and it takes its hit at a nice spot, with a nice blend of new features to reflect the kinds of programs that could be written for newer-generation hardware.
Giuseppe
How fast is it advertised as being and what for you would count as
"lightning fast"?
Chris
--
Chris "allusive" Dollin
> If I understand you correctly, you are saying the following: If a type
> system happens to be sufficiently advanced to be able to detect this
> kind of error, then in your opinion such a type system is totally bad.
> The reason being that it is "working at the wrong level".
>
> Is THAT what you are saying?
Yes, although I wouldn't use exactly those words, that is more or less
what I am saying.
Ian
Well, for all programs I've written or looked at so far (nothing very
large, though), they've compiled in less time than it took my editor
(emacs, or kate) to start up. That's not bad.
--
Scott Lawrence
Such a type system exists. http://en.wikipedia.org/wiki/Linear_type_system
But any language with this type system would have to be purely functional,
which has its own set of disadvantages.
Everything is a trade off. This kind of checking would be really nice
to have, but
you can't do it without complicating the language.
> If I understand you correctly, you are saying the following: If a type
> system happens to be sufficiently advanced to be able to detect this
> kind of error, then in your opinion such a type system is totally bad.
> The reason being that it is "working at the wrong level".
>
> Is THAT what you are saying?
>
In an imperative language, like Go, the type system is entirely the
wrong level for this kind of behaviour.
- Jessta
--
=====================
http://jessta.id.au
Maybe it's not as fast as lightning, but for ten thousand lines of
code, half a second isn't too terrible. 50ms would be significantly
better, so future optimization by adding incremental compilation or
automatic caching could be cool. But for now, if all of the other
compilers are making the same mistakes, and 1.5 seconds isn't fast
enough for your ten thousand lines of code, you probably won't be
satisfied with interpreters either, since they run the code orders of
magnitude slower than compiled versions for the most part. JIT
compilers might be ok, but those should run into the same problems
too.
So the question comes back to: why stop using a language based on a
compiler that is faster than most other compilers? Do you just not
code at all?
The Go compiler is already incremental. The compilation unit is the package.
--
@chickamade
On Aug 13, 3:00 pm, Ian Lance Taylor <i...@google.com> wrote:
> Andrei Alexandrescu <iro...@gmail.com> writes:
> >http://www.informit.com/articles/printerfriendly.aspx?p=1609144
>
> Thanks. As you know, this approach is clearly different from the one in
> Go, which is a transfer of ownership model, where the transfer of
> ownership is enforced only by convention. And, of course, Go doesn't
> use the type system, which follows Go's general guideline of keeping the
> type system light weight.
I understand. So what we have now is:
(1) Pass-by-bitblt through channels for value types (i.e. no
indirections, which means dynamic arrays are unduly shared). There is
no checking that a value type being passed actually does not have
indirections.
(2) Pass of ownership by unchecked convention for data with
indirections.
(3) Everything else is undefined.
If that's true, parts 1 and 2 are of limited expressiveness but it's
part 3 that's really problematic, and I'm not sure a putative
programmer understands the implications. Essentially that means even
lock-based programming relies on implementation-level vagaries because
without a memory model the compiler and the processor are relatively
free to hoist data around rather freely. We're back to the reorderings
hell often showcased as motivators of Java's and C++0x's memory
models.
> >> You're quite right: Go does permit data races, and does currently rely
> >> only on convention to avoid them. Go's advantage over C++, and it is a
> >> significant advantage, is that the rules for valid sharing are fairly
> >> simple, and the language makes it much easier to do valid sharing.
>
> > How does Go simplify the rules for valid sharing? Far as I can tell it
> > can only simplify if it renders undue aliasing undefined. Does Go have
> > something equivalent to Java's volatile and C++0x's atomic?
>
> The rules for valid sharing are encapsulated in the slogan "don't
> communicate by sharing memory; instead, share memory by communicating."
> That is, always use channels to communicate between goroutines. Always
> ensure that a single goroutine owns shared data, and use an explicit
> channel send to transfer ownership to a different goroutine.
>
> This approach can be used in other languages also, of course; the
> advantage I see in Go is that the language makes it simple and easy.
A memory model may as well be the perfect example where simplicity is
your enemy - not defining one is indeed simple, but has huge costs to
the user in one or more of correctness, portability, safety,
expressiveness, and efficiency.
So essentially (again I'm basing this on inference from what I've read
on golang.org and this discussion) one must only use the patterns (1)
and (2) in concurrent code. This severe limitation further erodes Go's
capabilities when compared to C, and I think it's reasonable that I
find Go's offering severely wanting for serious concurrent work.
Andrei
But D also gives you reference classes, genericity, function polymorphism, conditional compilation, design by contract assertions, compile-time meta programming, and many other features that are severely lacking in Go.
But from my personal experience, D is *at least* as easy to learn as Go, if not easier.
Just the fact that it doesn't break much with the familiar syntax of C#, Java, C++, etc. helps a lot in making the transition.
And genericity and polymorphism are invaluable tools when optimizing code reuse without reducing execution speed.
For all the common parts with Go (functions, methods, reference classes, strings, arrays, slices, ranges, foreach, etc), honestly I don't know why you say it's simpler in Go.
Can you show me two examples of code side by side, and tell me "look how much simpler it is with Go"?
Because from what I read, I'm sometimes wondering if you really know that the type declarations in D are MUCH simpler than in C/C++.
And btw, this doesn't mean that just because there are genericity and polymorphism in D, that I must use them.
I agree with you that the code must be developed using the KISS principle.
Look at the D code of my github account. All you will see is "baby-code".
Maybe, just maybe, the most complex stuff I use in D is genericity and polymorphism.
-j