I've been using Erlang and C++ to build a soft real-time system. As
the project has evolved we've needed to write more and more of the
code in C++ in order to achieve our latency requirements. But C++ is
not as performant as you might think until you start writing your own
allocators and cache-aligned mallocs and data structures. I've never
liked C++, so I decided to try OCaml and built a simple 100-line
program to build order books for Nasdaq. It turns out OCaml has really
competitive performance while being a really nice language.
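(For illustration: a price-level order book like the one described can be sketched in a few lines of OCaml. The representation below is a hypothetical simplification, not the actual program; it assumes a modern standard library with `Stdlib.Int` and `Map.update`.)

```ocaml
(* Hypothetical sketch of a price-level order book: prices (in ticks)
   map to the total resting size at that level. *)
module Book = Map.Make (Int)

(* Add [size] shares at [price], accumulating with any existing level. *)
let add_order book ~price ~size =
  Book.update price
    (function None -> Some size | Some s -> Some (s + size))
    book

(* Best (highest) bid level, if the book is non-empty. *)
let best_bid book = Book.max_binding_opt book
```

A real book would also need order cancellation and separate bid/ask sides, but even this skeleton shows why OCaml is pleasant for the task: the whole data structure is immutable and a one-liner.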
However, OCaml is broken! It does not provide any support for multicore
architectures, which by now is considered a bug! It doesn't even allow
me to load multiple runtimes into one C program.
Please fix OCaml! The first step would be to support multiple runtimes
running in the same process communicating using message queues.
Erik Rigtop
_______________________________________________
Caml-list mailing list. Subscription management:
http://yquem.inria.fr/cgi-bin/mailman/listinfo/caml-list
Archives: http://caml.inria.fr
Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
Bug reports: http://caml.inria.fr/bin/caml-bugs
You might be interested in OCaml4Multicore:
http://www.algo-prog.info/ocmc/web/
It's still experimental, but its authors would love to have feedback.
Best regards,
--
Stéphane
You should take a look at:
http://jocaml.inria.fr/
Regards,
Sylvain Le Gall
Haskell is a functional language with good performance that can use
multiple processors, but its learning curve is steeper.
OCaml is a close relative of Standard ML, so there might be some
implementation of SML that you like. MLton might allow multicore use,
but I'm not sure how mature it is. SML/NJ has a library or language
extension called Concurrent ML, but I think SML/NJ might not use
multiple processors.
Note that if you're not using a lot of threads, you can use Unix.fork to
do true parallel programming in OCaml.
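The Unix.fork approach can be sketched as follows: run a computation in a child process and read its result back over a pipe. A minimal sketch, assuming the `unix` library is linked in; `in_child` is a name chosen here for illustration.

```ocaml
(* Run [f x] in a forked child process; the (single-line) string result
   comes back to the parent over a pipe. A toy version of multi-process
   parallelism: the child really does run on another core. *)
let in_child (f : 'a -> string) (x : 'a) : string =
  let r, w = Unix.pipe () in
  match Unix.fork () with
  | 0 ->
      (* child: compute, write the result, and exit *)
      Unix.close r;
      let oc = Unix.out_channel_of_descr w in
      output_string oc (f x);
      close_out oc;
      exit 0
  | _pid ->
      (* parent: read the child's result and reap the child *)
      Unix.close w;
      let ic = Unix.in_channel_of_descr r in
      let result = input_line ic in
      close_in ic;
      ignore (Unix.wait ());
      result

let () = print_endline (in_child (fun n -> string_of_int (n * n)) 7)
```

For structured results you would marshal values over the pipe instead of plain strings; the structure stays the same.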
GHC and the Haskell language itself have serious performance problems.
> that can use multiple processors, but the learning curve is steeper and
> higher.
And Haskell lacks many of the features OCaml programmers take for granted.
> OCaml is a close relative of Standard ML, so there might be some
> implementation of SML that you like. MLTon might allow multicore use,
> but I'm not sure how mature it is. SML/NJ has a library or language
> extension called Concurrent ML, but I think SML/NJ might not use
> multiple processors.
MLton and SML/NJ are both incapable of using multiple cores. The Poly/ML
implementation of SML is multicore-friendly, but last time I looked (many
years ago) it was 100x slower than OCaml for floating point.
As long as you're looking at OCaml's close relatives with multicore support,
F# is your only viable option. Soon, HLVM will provide a cross-platform open
source solution. If you look further you will also find Scala and Clojure.
> Note that if you're not using a lot of threads, you can use Unix.fork to
> do true multithreaded programming ocaml.
We've discussed the problems with that before. Writing a parallel generic
quicksort seems to be a good test of a decent multicore capable language
implementation. Currently, F# is a *long* way ahead of everything open
source.
--
Dr Jon Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?e
> It's too bad that INRIA is not interested in fixing this bug.
Ask Santa Claus, you'll get it by Friday. Free shipping.
;-)
Martin
> The first step for OCaml would be to be able to run multiple
> communicating instances of the runtime bound to one core each in one
> process and have them communicate via lock free queues.
We've done some experiments in this direction at Jane Street. On Linux,
we've been able to get fast enough IPC channels for our purposes that
slamming things into the same memory space has not in the end been
necessary. (There is, I agree, some pain associated with running multiple
runtimes in the same process. If you're interested, contact me off-list and
I can try to get you some of the details of what we ran into.)
But have you tried using shared-memory segments for communicating between
different processes? You say the latencies are too high, but do you have
any measurements you could share? Have you tried queues using shared memory
segments, in particular? Inter-thread communication has latency as well,
and the performance issues depend on lots of things, OS and hardware
platform included. It would help in understanding the tradeoffs.
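The kind of measurement asked for above can be sketched like this: average round-trip time of a one-byte message over a pair of pipes between parent and child. This is a rough sketch (numbers depend heavily on OS, hardware, and scheduler); it assumes the `unix` library, and `pipe_round_trip_us` is a name invented here.

```ocaml
(* Measure the average round-trip latency (in microseconds) of a 1-byte
   message between a parent and a forked child, over two pipes. *)
let pipe_round_trip_us iters =
  let r1, w1 = Unix.pipe () and r2, w2 = Unix.pipe () in
  let buf = Bytes.make 1 'x' in
  match Unix.fork () with
  | 0 ->
      (* child: echo each byte straight back *)
      for _ = 1 to iters do
        ignore (Unix.read r1 buf 0 1);
        ignore (Unix.write w2 buf 0 1)
      done;
      exit 0
  | _pid ->
      let t0 = Unix.gettimeofday () in
      for _ = 1 to iters do
        ignore (Unix.write w1 buf 0 1);
        ignore (Unix.read r2 buf 0 1)
      done;
      let t1 = Unix.gettimeofday () in
      ignore (Unix.wait ());
      (t1 -. t0) /. float_of_int iters *. 1e6

let () = Printf.printf "%.1f us per round trip\n" (pipe_round_trip_us 10_000)
```

Comparing this number against an in-process queue on the same machine is exactly the tradeoff data the discussion is missing.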
As we go to higher-and-higher numbers of cores, I suspect that
message-passing solutions are likely to scale better than shared memory, so
I'm not so sure that OCaml is on the wrong path here. I think that most of
the work that's needed is going to come in the form of libraries, with only
a little work in the compiler and the runtime. Given that, I think this is
an issue for the community to solve, not INRIA.
y
> It's too bad that INRIA is not interested in fixing this bug. No
> matter what people say I consider this a bug. Two cores is standard by
> now, I'm used to 8, next year 32 and so on. OCaml will only become
> more and more irrelevant. I hate to see that happening.
This is a perennial topic in this list. Without meaning to dwell too
long on old arguments, I simply ask you to consider the following:
- Do you really think a concurrent GC with shared memory will scale neatly
to those 32 cores?
- Will memory access remain homogeneous for all cores as soon as we get into
the dozens of cores?
- Have you considered that many Ocaml users prefer a GC that offers maximum
single core performance, because their application is parallelised via
multiple processes communicating via message passing? In this context,
your "bug" is actually a "feature".
Best regards,
Dario Teixeira
I'm also experimenting now with shared memory (shm) as a fast IPC
mechanism. I've extended ocamlnet with a few functions that allow
copying an OCaml value into a shm segment that is accessible as a
bigarray:
https://godirepo.camlcity.org/svn/lib-ocamlnet2/trunk/code/src/netsys/netsys_mem.mli
Look especially for init_string. (I should also mention Ancient here,
which inspired this work.)
Having OCaml values in shm saves us some marshalling costs, which are
right now the biggest performance penalty when using multiple processes.
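The marshalling cost referred to here can be made concrete: with plain message passing, every value crossing a process boundary is serialized and rebuilt on the other side, a full copy each way. A minimal sketch using the standard Marshal module (`round_trip` is a name chosen for illustration):

```ocaml
(* The per-message cost of multi-process message passing: the value is
   serialized to bytes (one copy) and reconstructed from them (a second
   copy). This double copy is what shm schemes try to avoid. *)
let round_trip (v : 'a) : 'a =
  let buf = Marshal.to_bytes v [] in   (* serialize: O(size of v)   *)
  Marshal.from_bytes buf 0             (* deserialize: another copy *)
```

For small messages this is cheap; for large trees or tables it dominates, which is why placing the value directly in shm is attractive despite the problems listed below.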
However, this causes some problems, and at some point modifications of
the ocaml runtime will be necessary:
- The polymorphic equality and hash primitives no longer work
  for values in such shm segments (and that really hurts;
  in particular, string comparison is broken)
- Given that the shm segment is set to read-only after being set up, it
is not possible to have pointers from shm to other memory regions.
This is good, as this would be very dangerous (GC may delete or move
values in the regular heap). However, the question arises of when the
shm segment can be deleted. We would need help from the GC to identify
segments that are no longer referenced.
Without that, shm will be restricted to a role as low-level
inter-process buffers.
> As we go to higher-and-higher numbers of cores, I suspect that
> message-passing solutions are likely to scale better than shared
> memory, so I'm not so sure that OCaml is on the wrong path here. I
> think that most of the work that's needed is going to come in the form
> of libraries, with only a little work in the compiler and the
> runtime. Given that, I think this is an issue for the community to
> solve, not INRIA.
Well, message passing and shm do not exclude each other. We should
refine the terminology here: Actually, shm is just a basic mechanism
where several execution threads (including processes) can share memory.
What's often meant is, however, the role it plays for multi-threading,
i.e. shared mutable data structures. What's typical here is that several
threads write to the same memory regions. I don't know a good name for
that programming style; maybe "multi-threading-style shm" is the
best fit.
I'm working on a local message passing queue that can be used for long
messages, based on shm, and where the messages can contain normal ocaml
values (although these are likely copied to the normal heap by the
receiver, for the above-mentioned reasons, and this is an expensive
copy). The whole point will be that the data marshalling costs
are minimized. So far I can already say, we will need some changes in
the runtime to make such a mechanism fast and safe.
Gerd
--
------------------------------------------------------------
Gerd Stolpmann, Bad Nauheimer Str.3, 64289 Darmstadt,Germany
ge...@gerd-stolpmann.de http://www.gerd-stolpmann.de
Phone: +49-6151-153855 Fax: +49-6151-997714
------------------------------------------------------------
As you mention order books and soft real-time, I guess your main concern
is minimizing latency. You then need a style of parallelism that
focuses on the processing path of a single data item, where latency is
reduced by using several cores. I think ocaml is
unsuited for this type of task, but please don't call ocaml "broken"
because of this. Other types of parallelism can be well supported,
especially when you can accept multi-processing, and when you focus on
larger processing paths and partitioned data sets.
Gerd
--
------------------------------------------------------------
Gerd Stolpmann, Bad Nauheimer Str.3, 64289 Darmstadt,Germany
ge...@gerd-stolpmann.de http://www.gerd-stolpmann.de
Phone: +49-6151-153855 Fax: +49-6151-997714
------------------------------------------------------------
_______________________________________________
Yes. F# is Windows only for all intents and purposes.
> I also believe the .NET GC is not good enough for real-time systems.
Heavily allocating threads will experience pauses of up to several
seconds on .NET. However, threads that do not exceed their allocation quota
run almost completely concurrently with the GC, so their real-time
performance characteristics are good. This is the key to keeping UI threads
responsive.
Note that OCaml's GC has some problems. Specifically, the stack and arrays of
pointers in the heap are not traversed incrementally, incurring
arbitrarily-long stalls.
> Clojure running under real-time Java might be interesting.
Sounds like you have hard RT guarantees.
> It's too bad that INRIA is not interested in fixing this bug.
They spent something like a decade trying to write a decent concurrent GC and
pioneered the field.
> No matter what people say I consider this a bug.
A perf bug at best: it just means that OCaml is slower for many tasks.
> Two cores is standard by
> now, I'm used to 8, next year 32 and so on. OCaml will only become
> more and more irrelevant. I hate to see that happening.
Me too. The OCaml language will continue to kick ass for some time to come but
INRIA's implementation is no longer competitively performant for many tasks.
However, open source offerings are all quite dire, particularly stand-alone
ones.
> I think right now only Erlang got this right and they have a great
> library for developing enterprise applications too!
I couldn't disagree more. The *only* reason to work on parallelism is
performance and Erlang's performance sucks. I know Erlang scales better, but
it scales from poor absolute performance on 1 core to poor absolute
performance on n cores. Hence Erlang is hardly the de facto standard for HPC
on shared-memory supercomputers.
> The first step for OCaml would be to be able to run multiple
> communicating instances of the runtime bound to one core each in one
> process and have them communicate via lock free queues.
I think the first step is simply to replace OCaml's GC with a stop-the-world
parallel one like the one I wrote for HLVM. The problem is that OCaml's data
representation gives absolutely dire performance and kills scalability if you
do that. So you either need to optimize the GC for this or rewrite everything
from the ground up. OC4MC is doing the former. My HLVM project is doing the
latter. Suffice to say, there is no easy solution (although I prefer
mine ;-).
The following web page describes a commercial machine sold by Azul Systems
that has up to 16 54-core CPUs (=864 cores) and 768 GB of memory in a flat
SMP configuration:
http://www.azulsystems.com/products/compute_appliance.htm
As you can see, a GC with shared memory can already scale across dozens of
cores and memory access is no more heterogeneous than it was 20 years ago.
Also, note that homogeneous memory access is a red herring in this context
because it does not undermine the utility of a shared heap on a multicore.
> - Have you considered that many Ocaml users prefer a GC that offers maximum
> single core performance,
OCaml's GC is nowhere near offering maximum single core performance. Its
uniform data representation renders OCaml many times slower than its
competitors for many tasks. For example, filling a 10M float->float hash
table is over 18x slower with OCaml than with F#. FFT with a complex number
type is 5.5x slower with OCaml than F#. Fibonacci with floats is 3.3x slower
with OCaml than my own HLVM project (!).
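A scaled-down version of the hash-table fill described above looks like this in OCaml. This sketch illustrates the boxing issue being discussed (every float key and value is heap-allocated), not the exact 18x figure, which is Jon's claim; `fill` is a name chosen here.

```ocaml
(* Fill a float -> float hash table, scaled down from 10M entries.
   In OCaml each float key and each float value is boxed, so this loop
   allocates two heap blocks per iteration on top of the table's own
   storage - the GC and locality cost under discussion. *)
let fill n =
  let h : (float, float) Hashtbl.t = Hashtbl.create n in
  for i = 0 to n - 1 do
    let x = float_of_int i in
    Hashtbl.replace h x (sqrt x)
  done;
  h
```

An unboxed representation (as in F# or a float-specialized table) would touch a fraction of the memory for the same logical work.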
> because their application is parallelised via multiple processes
> communicating via message passing?
A circular argument based upon the self-selected group of remaining OCaml
users. Today's OCaml users use OCaml despite its shortcomings. If you want to
see the impact of OCaml's multicore unfriendliness, consider why the OCaml
community has haemorrhaged 50% of its users in only 2 years.
> In this context, your "bug" is actually a "feature".
I'm not even sure you can substantiate that in the very specific context of
distributed parallel theorem provers because other languages are so much more
efficient at handling common abstractions like parametric polymorphism. Got
any benchmarks?
--
Dr Jon Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?e
_______________________________________________
The benchmarks they mention can all easily be parallelized - that is
stuff you can also do with multi-processing. The interesting thing would be an
inherent parallel algorithm where the same memory region is accessed by
multiple threads. Or at least a numeric program (your examples seem to
be mostly from that area).
> > - Have you considered that many Ocaml users prefer a GC that offers maximum
> > single core performance,
>
> OCaml's GC is nowhere near offering maximum single core performance. Its
> uniform data representation renders OCaml many times slower than its
> competitors for many tasks. For example, filling a 10M float->float hash
> table is over 18x slower with OCaml than with F#. FFT with a complex number
> type is 5.5x slower with OCaml than F#. Fibonacci with floats is 3.3x slower
> with OCaml than my own HLVM project (!).
Sure, but such micro-benchmarks are, first of all, seldom correct, and
they say little about real-world programs.
For example, an important parameter of such benchmarks is how often
the GC runs. Ocaml runs the GC very often - good for latency, but bad
for micro-benchmarks, because other runtimes simply delay the GC until
some limit is exceeded, and so they often haven't run the GC even once
in the short time the benchmark takes.
It is simply a fact that the ocaml developers had some preferences. E.g.
allocating and freeing short-lived values is extremely fast (often
<10ns). This is very good when you do symbolic computations or have
lots of small strings, but irrelevant for numeric stuff, or for programs
where the lifetime of allocated memory is bound to server sessions. The
minor GC is very fast, but, as you observe, the uniform representation
has costs elsewhere.
> > because their application is parallelised via multiple processes
> > communicating via message passing?
>
> A circular argument based upon the self-selected group of remaining OCaml
> users. Today's OCaml users use OCaml despite its shortcomings. If you want to
> see the impact of OCaml's multicore unfriendliness, consider why the OCaml
> community has haemorrhaged 50% of its users in only 2 years.
I don't see that. That's just speculation - maybe some win32 ocaml users
switched to F#, but there are surely also reasons other than multicore
support, e.g. GUIs and better Windows integration. Btw, where do you get
your numbers from?
There are many, many users for whom multicore is just a useless hype.
Either the algorithms are inherently difficult to parallelize (and this
is the vast majority), or they are so easy (like all client/server
stuff) that multi-processing is sufficient. You can consider multicore as a
marketing trick of the chip industry to let the ordinary desktop user
pay for a feature that is mostly interesting for datacenters.
Gerd
--
------------------------------------------------------------
Gerd Stolpmann, Bad Nauheimer Str.3, 64289 Darmstadt,Germany
ge...@gerd-stolpmann.de http://www.gerd-stolpmann.de
Phone: +49-6151-153855 Fax: +49-6151-997714
------------------------------------------------------------
_______________________________________________
Only if the result is small, otherwise you spend all of your time
deserializing it. With a shared heap, you just return the resulting value by
reference.
> The interesting thing would be an
> inherent parallel algorithm where the same memory region is accessed by
> multiple threads.
Concurrent hash tables are a big thing for Azul:
"Scales well up to 768 CPUs" -
http://www.youtube.com/watch?v=WYXgtXWejRM
This blog entry describes performance on 750 cores:
http://blogs.azulsystems.com/cliff/2007/03/a_nonblocking_h.html
> Or at least a numeric program (your examples seem to be mostly from that
> area).
Yes. You can look at matrix operations or linear algebra (QR decomposition)
but also things like quicksort and graphics.
Would be interesting to compare symbolic performance as well though.
> > > - Have you considered that many Ocaml users prefer a GC that offers
> > > maximum single core performance,
> >
> > OCaml's GC is nowhere near offering maximum single core performance. Its
> > uniform data representation renders OCaml many times slower than its
> > competitors for many tasks. For example, filling a 10M float->float hash
> > table is over 18x slower with OCaml than with F#. FFT with a complex
> > number type is 5.5x slower with OCaml than F#. Fibonacci with floats is
> > 3.3x slower with OCaml than my own HLVM project (!).
>
> Sure, but these micro benchmarks are first seldom correct, and do not
> really count for real-world programs.
>
> For example, an important parameter of such benchmarks is the frequency
> the GC runs. Ocaml runs the GC very often - good for latencies, but bad
> for micro benchmarks because other runtimes simply delay the GC until
> some limits are exceeded, so these other runtimes often haven't run the
> GC even once in the short period of time the benchmark runs.
You're missing the point: every example I gave shouldn't be doing any GC at
all and doesn't in F# but spends a lot of time in the GC in OCaml just
because of unnecessary boxing. The mutator also takes longer because boxing
damages locality.
> It is simply a fact that the ocaml developers had some preferences. E.g.
> allocating and freeing short-living values is extremely fast (often
> <10ns). This is very good when you do symbolic computations, or have
> lots of small strings, but ignorable for numeric stuff, or for programs
> where the lifetime of allocated memory is bound to server sessions. The
> minor GC is very fast, but, as you observe, the uniform representation
> has costs elsewhere.
Yes. That's why I think the best way forward is to develop HLVM.
> > > because their application is parallelised via multiple processes
> > > communicating via message passing?
> >
> > A circular argument based upon the self-selected group of remaining OCaml
> > users. Today's OCaml users use OCaml despite its shortcomings. If you
> > want to see the impact of OCaml's multicore unfriendliness, consider why
> > the OCaml community has haemorrhaged 50% of its users in only 2 years.
>
> Don't see that. That's just speculation - maybe some win32 ocaml users
> switched to F#,
I wasn't a win32 user. :-)
> but there are for sure also other reasons than multicore
> support, e.g. GUIs and better Windows integration. Btw, where do you get
> your numbers from?
Traffic here:
2007: 5814
2008: 4051
2009: 3071
http://groups.google.com/group/fa.caml/about
Or searches for OCaml on Google:
http://www.google.com/trends?q=ocaml%2Cclojure%2Cf%23
The number of OCaml jobs has crashed as well:
http://www.itjobswatch.co.uk/jobs/uk/ocaml.do
And, of course, what our customers say.
> There are many, many users for whom multicore is just a useless hype.
In 2005, the OCaml community was composed largely of performance junkies who
came here because OCaml produced excellent performance from succinct and
readable code on benchmark after benchmark. More people were buying OFS than
were using Coq. I don't believe for a second that many of OCaml's former
users thought multicore was just useless hype.
> Either the algorithms are inherently difficult to parallelize (and this
> is vast majority),
I have had great success parallelizing code.
> or are that easy (like all client/server stuff) that multi-processing is
> sufficient.
There are certainly applications where multicore is not beneficial.
> You can consider multicore as a marketing trick of the chip
> industry to let the ordinary desktop user pay for a feature that is mostly
> interesting for datacenters.
Ordinary desktop users have been paying top dollar for parallel computers in
the form of GPUs for some time now. The use of GPUs for more general
programming has been a really hot topic for years and just became viable.
Even games consoles have multicores. ARM are making quadcores for your phone
and netbook!
If I can get HLVM to make parallel OCaml-style programming easy, I think a lot
of people would love it.
--
Dr Jon Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?e
_______________________________________________
On Sun, Dec 20, 2009 at 11:30 PM, Jon Harrop <j...@ffconsultancy.com> wrote:
> Or searches for OCaml on Google:
>
> http://www.google.com/trends?q=ocaml%2Cclojure%2Cf%23
I'm not sure if OCaml is becoming more or less popular, but I find the
evidence for a decline less than convincing. It is true that there is less
traffic on this list, but it's hard to know how to interpret this. I
haven't gotten the sense that Python is in decline, but traffic on
comp.lang.python has also been declining since 2005.
Google Trends is also a confusing metric. For example, it suggests that
Java, Python and C++ have been declining for years:
http://www.google.com/trends?q=java&ctab=0&geo=all&date=all&sort=0
http://www.google.com/trends?q=C%2B%2B&ctab=0&geo=all&date=all&sort=0
http://www.google.com/trends?q=Python&ctab=0&geo=all&date=all&sort=0
My suspicion is that Google Trends gives numbers normalized to the overall
search world, and so things that aren't growing fast look smaller as search
volume in general grows. Obviously an up-and-coming language like
Clojure still shows an upswing, as one would expect.
> The number of OCaml jobs has crashed as well:
>
> http://www.itjobswatch.co.uk/jobs/uk/ocaml.do
I thought this was a silly metric when it spiked up, and continue to think
it's a silly metric today. There are a tiny number of legitimate ocaml jobs
(and the same is true for Haskell, Clojure, Scala, SML, etc.) and the
ups and downs in this tiny sample are not statistically significant. Again:
don't pick OCaml because of the large number of OCaml jobs out there. There
are very very few, both now and in '05.
Reliable metrics on a community like this are hard to come by, but things
seem quite vibrant to me. There are always new OCaml startups popping into
existence, new libraries being written, and new things coming out of INRIA
(for example, the arrival of modules as first-class values, which is
expected in OCaml 3.12). From my point of view, there is still no platform
out there I would rather be using.
y
That's because I don't have much time to post here nowadays. I'm
sure that if Jon followed my example, we would have a parallel GC for
OCaml by the end of the year.
Regards,
Markus
--
Markus Mottl http://www.ocaml.info markus...@gmail.com
HLVM already has a parallel GC. :-)
--
Dr Jon Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?e
_______________________________________________
> We've discussed the problems with that before. Writing a parallel generic
> quicksort seems to be a good test of a decent multicore capable language
> implementation. Currently, F# is a *long* way ahead of everything open
> source.
How do you implement it?
1) divide at the top and then let each core sort its smaller array
Say you have 2^n cores then the first n splits in merge sort would
first sort the values into the 2 regions and then start a thread for
each region (start a new one, do the other part in this thread). After
n splits you would switch to a uni-core quicksort.
For this you need to split well so each core ends up with a roughly
equal sized chunk.
2) factory/worker approach
Each core runs a factory/worker thread waiting on a job queue. You
start by dumping the full array into the job queue. Some random core
picks it up, splits it into 2 regions and dumps one into the job
queue. If the job gets too small (PAGE_SIZE? cache line size? total
size / cores^2?) the factory/worker switches to normal uni-core
quicksort and sorts the whole chunk.
The job queue should probably be priority based so larger chunks are
sorted before smaller.
Here the quality of each split is not so important. If a chunk is
smaller, and therefore faster, the core just picks up the next job in
the queue. But you need more synchronization between the cores for the
job queue. On the other hand you aren't limited to 2^n cores. Any
number will do.
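The divide step both schemes share can be sketched as a sequential skeleton. The recursive calls marked below are where approach 1 would spawn a thread (down to depth n) and where approach 2 would enqueue a job; the partition itself is a standard in-place Lomuto scheme, chosen here for brevity.

```ocaml
(* In-place quicksort of a.(lo) .. a.(hi - 1). The two recursive calls
   at the bottom are the points a parallel version would hand off to
   other cores, falling back to this sequential code for small chunks. *)
let rec qsort a lo hi =
  if hi - lo > 1 then begin
    let pivot = a.(lo) in
    let i = ref lo in
    for j = lo + 1 to hi - 1 do
      if a.(j) < pivot then begin
        incr i;
        let t = a.(!i) in a.(!i) <- a.(j); a.(j) <- t
      end
    done;
    let t = a.(lo) in a.(lo) <- a.(!i); a.(!i) <- t;
    (* parallel version: spawn/enqueue one of these until the chunk or
       the recursion depth falls below a threshold *)
    qsort a lo !i;
    qsort a (!i + 1) hi
  end
```

Note that for the threaded variant the two halves touch disjoint array slices, so no locking is needed on the data itself - only on the job queue in the worker scheme.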
MfG
Goswin
That seems dark...
--
Architecte Informatique chez Blueline/Gulfsat:
Administration Systeme, Recherche & Developpement
+261 34 29 155 34 / +261 33 11 207 36
I have a large project written in C++, for which I am planning to write add-ons and tools in OCaml, e.g. different tools to analyse my code (dependency stuff), an interpreter for a script language I plan to include, etc. From my time at university I remember that OCaml allows compiling libraries which can be included in a C/C++ program, and I know people who use it extensively in other projects. Therefore, I decided to give OCaml a try. I like functional programming, and my first steps with OCaml are very promising.
Following this discussion, I am not so sure anymore whether OCaml is a good decision. Maybe I got this discussion wrong, but if OCaml is dying out, I might have to look for another functional programming language to use with my project.
Please don't take this email as an offence; I am just curious. At this point I can still easily look for an alternative to OCaml, so it's best to ask now.
regards,
keyan
Please don't believe Jon's propaganda. He has just very specific needs
(high-performance computing on desktops), and generalizes them into
"it's not perfect for me anymore, so it's bad anyway". He has been
doing that for years now, not seeing that he really harms the way ocaml
is seen by newcomers.
The examples you mention are good matches for using ocaml - symbolic
programming with lots of terms and trees. That's the stuff ocaml was
originally developed for, and it delivers excellent performance.
Also, ocaml is still backed by INRIA, and there is still a large
community, including a growing number of industrial users.
Gerd
--
------------------------------------------------------------
Gerd Stolpmann, Bad Nauheimer Str.3, 64289 Darmstadt,Germany
ge...@gerd-stolpmann.de http://www.gerd-stolpmann.de
Phone: +49-6151-153855 Fax: +49-6151-997714
------------------------------------------------------------
_______________________________________________
Functional programming languages will never become mainstream. There is
a thread about this on the Haskell mailing list from time to time.
Which language you choose should always depend on your (or your
team's) skills, tools and tasks.
-Philip
Quite alive.
http://packages.debian.org/changelogs/pool/main/o/ocaml/current/changelog
(Look for "new upstream release" to check how often new upstream have
been released and made part of Debian-based distributions. Fedora
people have similar stories to tell.)
Cheers.
--
Stefano Zacchiroli -o- PhD in Computer Science \ PostDoc @ Univ. Paris 7
zack@{upsilon.cc,pps.jussieu.fr,debian.org} -<>- http://upsilon.cc/zack/
Dietro un grande uomo c'è ..| . |. Et ne m'en veux pas si je te tutoie
sempre uno zaino ...........| ..: |.... Je dis tu à tous ceux que j'aime
I've seen some interesting parallel programming projects and language
extensions using ocaml. I suppose ocaml could benefit from a
parallelizing compiler & standardized explicit parallelism constructs,
and be a serious contender for the multicore "market". I personally
started out with Haskell with regards to contemporary high-level
languages, and then switched to ocaml because of performance and
sanity. I think I also love the higher-order modules =) I want to
rewrite my stock prediction program in ocaml nowadays. In Haskell, it
was a pain to work on large files. Good thing I lost the code in a hard
drive crash. The way I see it, ocaml has adequate performance, and is
excellent for algorithmic work. I have this half-finished project that
features OCaml implementations of some algorithms. You should see them;
they are almost identical to pseudo-code. I should move that project to
ocamlforge.
Cheers,
--
Eray Ozkural, PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
http://groups.yahoo.com/group/ai-philosophy
http://myspace.com/arizanesil http://myspace.com/malfunct
> i dont want to go into a which-programming-language-is-best-for-what
> discussion (as this will never end), but at this point i wanted to know
> if ocaml is still alive, i.e. if you can still easily download and
> install it on a variety of OS, and if it will be supported in the future.
The fact that the compiler's source code is (a) available, and (b)
straightforward enough for mere mortals to understand should give you
some assurances that Ocaml can never die by fiat. Moreover, there's
a vibrant community around it, both in industry and in the open-source
world. (Ocaml support in Debian and Fedora is top-notch, for example).
Last but not least, Ocaml plays a central role in multiple INRIA
projects, which means its creators have all the reason to continue
maintaining it and improving it for the foreseeable future (and there's
some interesting goodies in the upcoming 3.12 release, for example).
Though I am grateful and acknowledge Jon Harrop's help in the beginner's
list, you should take his prognostications with a grain of salt. Every
now and again he proclaims that "Ocaml is doomed! We're all gonna die!".
It has almost become a comedy catchphrase of sorts in this list...
So yes, do choose Ocaml for your project. You won't regret it.
Best regards,
Dario Teixeira
Dario Teixeira wrote:
> Last but not least, Ocaml plays a central role in multiple INRIA
> projects, which means its creators have all the reason to continue
> maintaining it and improving it for the foreseeable future (and there's
> some interesting goodies in the upcoming 3.12 release, for example).
>
Actually, this gives these projects an incentive to ensure that Ocaml
survives, which gives an incentive for some 'maintenance engineers' to
be kept on staff to ensure that Ocaml does not bit-rot. This gives only
quite partial incentive to a team of researchers (the creators of Ocaml)
to do maintenance (as that is usually not research, thus not the kind of
work of interest to researchers). And entropy is a real problem --
Ocaml is now quite mature, which means that radical changes are well
nigh impossible; this is a serious disincentive for researchers. End of
quibble.
Personally, I would really like to see a 4.00 release that truly
warrants that name. The 3.XX line can be maintained for a few more
years while people switch, the same way gcc did it.
In any case, I have nevertheless voted with my time and effort: I have 1
large project being implemented in Ocaml, 3 medium ones in metaocaml,
although I must admit that I have some 'research' code in Haskell (and
in Maple, but that's another story).
Jacques
I am beginning to use Ocsigen for a growing web project:
is multicore support useless for scaling on Ocsigen?
X-post to Ocsigen ML.
--
Architecte Informatique chez Blueline/Gulfsat:
Administration Systeme, Recherche & Developpement
+261 34 29 155 34 / +261 33 11 207 36
_______________________________________________
> Ok, so for the beginner I am (must I ask on the beginners ML?): is
> multicore support just useless or not?
That *entirely* depends on what you want to do. If, for example, you
have to do a large calculation that is limited by memory and not by CPU,
or, if you have an application that is trivially parallelized anyway,
multicore support won't make much of a difference. There are
(many) other applications, however, where it does matter quite a lot.
Actually, the biggest effect of multicore architectures I see is to shift
the emphasis from raw CPU power to memory bandwidth.
--
Thomas