
Need some light on using Ada or not


Luis P. Mendes

Feb 18, 2011, 5:52:38 PM
Hi,

I have two projects to work on: one in the data mining field and
another involving XML parsing.
I've been learning C++ (coming from a Python, Pascal, VB background)
because it is fast (though of course that depends on the implementation)
and because it has a lot of libraries.

But I find C++ a very complex language, and Ada appeals to me especially
for its overall safety. Or maybe also because I don't like to go with
majorities... :-)

I have some questions, however, that I'd like to be answered:
1. If Ada is more type safe and restricted than C++, how can it be
significantly slower?
Please see: http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=gnat
where for some tests, Ada is 2x, 3x, 4x and 5x slower.
For the data mining work I want to do, speed is essential.
I'll code in Linux and use gcc as a compiler/linker.

2. In C++ I can use lots of libraries. I'm thinking of data visualization
libraries, for example http://www.graphviz.org/Gallery/undirected/softmaint.html.
I've read that Ada can use some C bindings. Can I use any C library?
Some? Is it easy?
I don't want to drop C++ for Ada to build a project that later has to be
rewritten in C++ because of lack of libraries.

3. Is there any kind of fast xml stream parser library? No DOM or SAX,
just to read attributes.


Luis

Georg Bauhaus

Feb 18, 2011, 6:58:24 PM
On 2/18/11 11:52 PM, Luis P. Mendes wrote:

> I have some questions, however, that I'd like to be answered:
> 1. If Ada is more type safe and restricted than C++, how can it be
> significantly slower?
> Please see: http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=gnat
> where for some tests, Ada is 2x, 3x, 4x and 5x slower.

I happen to know the details of the 3x case (and mention
for the record that there are 6 more Ada programs running at ±1x).
The 3x program, regex-dna, has two parts: string search and
string replacement.

The string search part is among the fastest of the shootout
programs. The replacement part is slow (even though there seem
to have been improvements in the GNAT library recently, I think).
The reason is that the pattern matching library used depends on
Ada.Strings.Unbounded, and replacements change the string in place.
That differs from typical regex libraries, such as Python's, which
construct new strings under the hood. Consequently, the replacement
part of regex-dna accounts for the 3x.

(The test rules say that we should use exactly one library. If we could
use GNAT's Spitbol patterns for searching and one of the other pattern
matching packages for replacement, the Ada program would be closer to the
top of the list. The second part of the program would be shorter, too.)

I'm saying this in order to put the speed argument in perspective.
In fact, the shootout has some Ada programs that demonstrate
how to get high speed executables from nothing but standard Ada.
No tricks, no compiler built-ins.

> 3. Is there any kind of fast xml stream parser library? No DOM or SAX,
> just to read attributes.

Raincode once published their XML parsing library which, IIRC, was for
constructing efficient "low level" XML parsers.

Edward Fish

Feb 18, 2011, 7:20:55 PM
On Feb 18, 2:52 pm, "Luis P. Mendes" <luislupe...@gmailXXX.com> wrote:
> I have some questions, however, that I'd like to be answered:
> 1. If Ada is more type safe and restricted than C++, how can it be
> significantly slower?
> ...
> 2. In C++ I can use lots of libraries. Can I use any C library?
> Some? Is it easy?
> ...
> 3. Is there any kind of fast xml stream parser library? No DOM or SAX,
> just to read attributes.

I'm going to answer in reverse-order.
#3 - There is XMLAda; I mention it only because I've heard of it. I
haven't had a need for XML, much less a FAST XML parser. But consider
that you might not NEED a full-blown XML parser if what you're doing
is relatively simple: you could instead have your type override the
'Read & 'Write attributes in the proper XML format and use Streams.
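A rough sketch of that idea (the type and names here are invented for
illustration, not taken from any library): a record type can override its
'Write attribute so that writing it to a stream emits an XML element
directly.

```ada
with Ada.Streams;
with Ada.Text_IO.Text_Streams;

procedure Stream_Demo is
   type Point is record
      X, Y : Integer;
   end record;

   --  Override 'Write so the record is emitted as an XML element.
   procedure Write_Point
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Item   : Point);
   for Point'Write use Write_Point;

   procedure Write_Point
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Item   : Point) is
   begin
      String'Write (Stream,
        "<point x=""" & Integer'Image (Item.X)
        & """ y=""" & Integer'Image (Item.Y) & """/>");
   end Write_Point;

   P : constant Point := (X => 1, Y => 2);
begin
   --  Write the record, as XML, to standard output's stream.
   Point'Write
     (Ada.Text_IO.Text_Streams.Stream (Ada.Text_IO.Standard_Output), P);
end Stream_Demo;
```

Reading could be handled the same way by overriding 'Read, though parsing
attributes back out of a stream takes noticeably more code than writing
them.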

#2 - This is highly dependent on you. Some people are perfectly happy
with a light binding, in which case it's EASY; some people want a
THICK binding, in which case it's a bit harder because you have to
design an interface which essentially (a) hides the C/C++ imports &
calls, and (b) is in the "Ada style." To take OpenGL as an example:
instead of a function taking a GLenum, you would subtype it so that
it takes ONLY the valid values.
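A sketch of that thick-binding style (the package layout and procedure
names are invented for illustration, and the GLenum values shown are
assumptions; check them against the real OpenGL headers before use):

```ada
with Interfaces.C;

package GL_Thick is
   --  Thin layer: mirrors the C declaration directly, accepts any value.
   type GLenum is new Interfaces.C.unsigned;
   procedure glEnable_Raw (Cap : GLenum);
   pragma Import (C, glEnable_Raw, "glEnable");

   --  Thick layer: only the capabilities we choose to expose, so an
   --  invalid value cannot even compile.
   type Capability is (Depth_Test, Cull_Face, Blend);
   procedure Enable (Cap : Capability);
end GL_Thick;

package body GL_Thick is
   --  Assumed GLenum values for the three capabilities.
   To_GLenum : constant array (Capability) of GLenum :=
     (Depth_Test => 16#0B71#, Cull_Face => 16#0B44#, Blend => 16#0BE2#);

   procedure Enable (Cap : Capability) is
   begin
      glEnable_Raw (To_GLenum (Cap));
   end Enable;
end GL_Thick;
```

The design choice is that the raw import stays private to the binding,
while callers only ever see the restricted Ada-style Enable.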

#1 - Speed is HEAVILY dependent on the implementation. Consider, for a
moment, sorting. A bubble-sort and a quick-sort are exactly the same
in terms of input/output [on an array of discrete types], but the
speeds are radically different. As Georg mentioned, that shootout
program used the unbounded version of strings, which makes
manipulating them rather slow... it could likely have been done a bit
faster with normal strings, but with a bit more effort and "dancing
around" the fixed nature of string lengths.

Vadim Godunko

Feb 19, 2011, 3:43:06 AM
On Feb 19, 1:52 am, "Luis P. Mendes" <luislupe...@gmailXXX.com> wrote:
>
> 3. Is there any kind of fast xml stream parser library? No DOM or SAX,
> just to read attributes.
>
Another option for processing XML streams is the XML module of Matreshka:

http://adaforge.qtada.com/cgi-bin/tracker.fcgi/matreshka

It also includes an optimized implementation of the unbounded form of
string.

Brian Drummond

Feb 19, 2011, 8:07:58 AM
On 18 Feb 2011 22:52:38 GMT, "Luis P. Mendes" <luisl...@gmailXXX.com> wrote:

>...
>
>But I find C++ a very complex language and Ada appeals to me specially
>for its overall safety. Or maybe also because I don't like to go with
>majorities... :-)

One other Ada advantage is not often mentioned, but will strongly appeal once
you start using it...

I believe the reason it is not often mentioned is because Ada texts tend to
pre-date current trends in software development (agile programming, refactoring
and so on).

I find Ada very, very easy to refactor, and quite safe too, because the
kinds of bugs that you accidentally introduce while refactoring are the
kinds of bugs that the compiler is good at catching: visibility rules, etc.

Moving code into packages or into local procedures ... just works, in ways
that C++ would rarely allow, usually failing in the most obscure ways in my
experience. When I change the structure of a program, I detest having to
change every other . into ->, for example.

Another example: moving an array from a local variable (the stack) to the
heap (after I increased its size and hit a stack size limit) meant I had to
refer to it through an access type, instead of directly. Instead of
"my_array(I,J,K)" I was faced with changing every reference to
"my_array_ptr.all(I,J,K)" ...
However...

   my_array : big_array_type renames my_array_ptr.all;

and I was done.
(Apologies to the regulars; I've told that story before.
But it was one of the incidents that sold me on Ada's ease of use)

>I have some questions, however, that I'd like to be answered:
>1. If Ada is more type safe and restricted than C++, how can it be
>significantly slower?
>Please see: http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=gnat
>where for some tests, Ada is 2x, 3x, 4x and 5x slower.

Two possible reasons; both come down to the relative number of people developing
for both languages.

(1) the C++ compiler may be more highly developed. Granted that much of gcc is
common to both, that may not be the real issue.

(2) The C++ shootout code examples may be more highly developed.
Since the majority of those examples show approximate parity while a
(substantial) minority favour C++, I would suspect this. To justify it properly
would take a case-by-case analysis. But in a quick look at the 4x slower
results, one stands out like a sore thumb...

binary-trees (CPU secs, elapsed secs, memory, code size; last 4 columns are per-core CPU usage)
Ada 2005 GNAT   37.45   37.47   198,132   955     0%   0%  100%   0%
C++ GNU g++     26.99    8.40   358,832   892    87%  61%   99%  76%

Single processor, the Ada version is just 38% slower, with half the memory
footprint; probably a damn good compromise between footprint and speed.
However the C++ version exploits 4 cores. Given Ada's support for concurrent
tasks, that suggests some room for improvement...

You have some of the inside story on other examples from other posters.

>2. In C++ I can use lots of libraries. I'm thinking on data visualization
>libraries, for example http://www.graphviz.org/Gallery/undirected/softmaint.html.
>I've read that Ada can use some C bindings. Can I use any C library?
>Some? Is it easy?

Ada can easily bind to C libraries; it's standard and well documented.
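As a minimal sketch of such a binding (importing the C library's puts here,
so no extra library is needed; the wrapper names are just for this example):

```ada
with Interfaces.C;         use Interfaces.C;
with Interfaces.C.Strings; use Interfaces.C.Strings;

procedure C_Binding_Demo is
   --  Import the C library's puts() directly; the Ada declaration
   --  mirrors the C prototype int puts(const char *s).
   function puts (S : chars_ptr) return int;
   pragma Import (C, puts, "puts");

   Msg    : chars_ptr := New_String ("Hello from C, called from Ada");
   Result : int;
begin
   Result := puts (Msg);
   Free (Msg);  --  New_String allocates; the caller must free it.
end C_Binding_Demo;
```

The same pattern (declare an Ada subprogram, then pragma Import with the C
symbol name) scales up to whole library bindings.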

However there already exist bindings to some graphics libraries and data
visualisation tools - look at GTKAda and QTAda for GUI and some graphics
bindings, and PLPlot for data visualisation. One of these may work for you.

C++ bindings are also possible, but with some work and (currently) some
limitations.
A GCC recent enough to support "-fdump-ada-spec" will auto-generate an Ada
spec from C++ sources, which will save a lot of the work. (AdaCore "libre"
2010 has it; the FSF GCC 4.5.0 has not. Anyone know if it made it into
4.5.1 or 4.6.0?)

I would currently treat that binding as a starting point rather than a complete
solution. For example, it (libre "GPL2010" from Adacore) has problems with
templates. (Especially when your well-proven C++ template library still has bugs
that Ada generics would have caught first time through the compiler!)

One example (independent of template bugs!): I instantiated a template to
create a new class in the C++ library. But my C++ code never actually
created an instance of the class; I only did that through the Ada binding.
As a result, the C++ compiler never created a constructor, and the link
failed with "missing constructor" (boiled down from a screenful of error
messages). I worked around this by writing a C++ function which
instantiated one of everything I needed...

- Brian

Simon Wright

Feb 19, 2011, 9:17:18 AM
Brian Drummond <brian_d...@btconnect.com> writes:

> Another example : moving an array from local variable (the stack) to
> the heap (after I increased its size and hit a stack size limit) meant
> I had to refer to it through an access type, instead of
> directly. Instead of "my_array(I,J,K)" I was faced with changing every
> reference to "my_array_ptr.all(I,J,K)" ...

Really? I think that "my_array_ptr(I,J,K)" would have worked ..

procedure Arrays is
   subtype Array_Bound is Natural range 0 .. 100;
   type Array_Type is
     array (Array_Bound, Array_Bound, Array_Bound) of Integer;
   type Array_Pointer is access Array_Type;
   My_Array : constant Array_Pointer := new Array_Type;
begin
   My_Array (1, 2, 3) := 42;
end Arrays;

Simon Wright

Feb 19, 2011, 9:25:17 AM
Georg Bauhaus <rm-host...@maps.futureapps.de> writes:

> On 2/18/11 11:52 PM, Luis P. Mendes wrote:
>
>> I have some questions, however, that I'd like to be answered:
>> 1. If Ada is more type safe and restricted than C++, how can it be
>> significantly slower?
>> Please see: http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=gnat
>> where for some tests, Ada is 2x, 3x, 4x and 5x slower.
>
> I happen to know the details of the 3x case (and mention
> for the record that there are 6 more Ada programs running at ±1x).
> The 3x program, regex-dna, has two parts: string search and
> string replacement.

At the moment, regex-dna is timed at 33.01 for GNAT, 5.76 for C++ GNU
g++.

However, the g++ code relies on a library which is not supplied as part
of the compiler: re2. I found it on Google Code and downloaded it. It
appeared to build OK, but when building the benchmark against it there
was an error (OK, it could have been because I'm using GCC 4.6.0
experimental):

diff -r 160e31271912 re2/stringpiece.h
--- a/re2/stringpiece.h Tue Feb 01 11:09:33 2011 -0500
+++ b/re2/stringpiece.h Sat Feb 19 14:22:52 2011 +0000
@@ -117,7 +117,7 @@
typedef const char& reference;
typedef const char& const_reference;
typedef size_t size_type;
- typedef ptrdiff_t difference_type;
+ typedef std::ptrdiff_t difference_type;
static const size_type npos;
typedef const char* const_iterator;
typedef const char* iterator;


Admittedly, after that it built & ran OK...

Georg Bauhaus

Feb 19, 2011, 9:36:45 AM
On 2/19/11 2:07 PM, Brian Drummond wrote:
> On 18 Feb 2011 22:52:38 GMT, "Luis P. Mendes"<luisl...@gmailXXX.com> wrote:

>> I have some questions, however, that I'd like to be answered:
>> 1. If Ada is more type safe and restricted than C++, how can it be
>> significantly slower?
>> Please see: http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=gnat
>> where for some tests, Ada is 2x, 3x, 4x and 5x slower.
>
> Two possible reasons; both come down to the relative number of people developing
> for both languages.

Some reasons are pretty simple: the results are due to the specialized
libraries used, rather than a consequence of the properties of the
respective languages (built-in storage management features, in this
test). (Also interesting: the Java versions vary widely, and some are
fast even though the solutions use plain Java.) The leading C and C++
entries win by making these choices:

C #includes <apr_pool.h>; that is, it exercises the Apache memory pool,
not what is available with plain C.

C++ #includes a similar thing from the Boost libraries.

This is allowed by the test's rules and authorities, but it
may make some conclude that relative speed differences
are due to the language choice when they aren't.

> Single processor, the Ada version is just 38% slower, with half the memory
> footprint; probably a damn good compromise between footprint and speed.
> However the C++ version exploits 4 cores. Given Ada's support for concurrent
> tasks, that suggests some room for improvement...

I vaguely remember that it has been tried before, but so far there
is no better solution.

Brian Drummond

Feb 19, 2011, 1:02:58 PM
On Sat, 19 Feb 2011 14:17:18 +0000, Simon Wright <si...@pushface.org> wrote:

>Brian Drummond <brian_d...@btconnect.com> writes:
>
>> Another example : moving an array from local variable (the stack) to
>> the heap (after I increased its size and hit a stack size limit) meant
>> I had to refer to it through an access type, instead of
>> directly. Instead of "my_array(I,J,K)" I was faced with changing every
>> reference to "my_array_ptr.all(I,J,K)" ...
>
>Really? I think that "my_array_ptr(I,J,K)" would have worked ..

I believe you are correct sir!
Which is even simpler than the rename...

- Brian

Bill Findlay

Feb 19, 2011, 1:07:49 PM


On 19/02/2011 18:02, in article 5b10m6ts00bu7shko...@4ax.com,
"Brian Drummond" <brian_d...@btconnect.com> wrote:

And if you call the pointer "my_array" you don't need to change anything.

--
Bill Findlay
with blueyonder.co.uk;
use surname & forename;


Brian Drummond

Feb 19, 2011, 1:25:44 PM
On Sat, 19 Feb 2011 15:36:45 +0100, Georg Bauhaus
<rm-host...@maps.futureapps.de> wrote:

>On 2/19/11 2:07 PM, Brian Drummond wrote:
>> On 18 Feb 2011 22:52:38 GMT, "Luis P. Mendes"<luisl...@gmailXXX.com> wrote:
>
>>> I have some questions, however, that I'd like to be answered:
>>> 1. If Ada is more type safe and restricted than C++, how can it be
>>> significantly slower?

>> Two possible reasons; both come down to the relative number of people developing


>> for both languages.
>
>Some reasons are pretty simple: when the results are due
>to specialized libraries used, rather than a consequence
>of the properties of the respective language (built in
>storage management features in this test). (Also interesting:
>The Java versions vary widely, and some are fast even though
>the solutions uses plain Java.) The leading C and C++
>entries win by making these choices:
>
>C #includes <apr_pool.h>, that is, it exercises the Apache memory pool,
>not what is available with plain C.
>
>C++ #includes a similar thing from the Boost libraries.
>
>This is allowed by the test's rules and authorities,

even though you are not allowed to supply your own pool.
Possibly harsh, but I can see the logic behind it.

>> Single processor, the Ada version is just 38% slower, with half the memory
>> footprint; probably a damn good compromise between footprint and speed.
>> However the C++ version exploits 4 cores. Given Ada's support for concurrent
>> tasks, that suggests some room for improvement...

Actually there is more to this ... I was erroneously confusing the "cpu time"
for the 4-processor code with the single-processor result which is actually
presented on another page.

4-core: http://shootout.alioth.debian.org/u64q/performance.php?test=binarytrees
C++: 26.99s CPU time, 8.40s elapsed.
Ada: 37.45s CPU time, 37.47s elapsed.

1-core: http://shootout.alioth.debian.org/u64/performance.php?test=binarytrees
C++: 17.29s CPU time, 17.31s elapsed.
Ada: 37.42s CPU time, 37.44s elapsed.

So
(a) even single-core, there is 2:1 between them.
(b) performance scales poorly; I would guess the multicore version is thrashing
the cache, and memory bandwidth is the limitation.

>I vaguely remember that it has been tried before, but so far there
>is no better solution.

That makes me feel better...

I have broken down and finally started to learn Ada's tasking. So far I
have gone from 56s (CPU), 56s (elapsed) with one task, to 120s (CPU),
64s (elapsed) with multiple tasks (on a smallish 2-core laptop)...

Disappointing.

(If anybody's interested, I am using 9 tasks, one per "Depth" value in the
main while loop. The basic pattern is shown on p. 488 of Barnes, Ada 2005.
Iterate over the tasks, providing their start values. Iterate again,
collecting results. I think the next step will be to collect results
asynchronously when each task finishes, using a protected object.)
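That protected-object idea might look something like the following toy (all
names invented, and the "work" is a stand-in): each worker deposits its
result as it finishes, and the main program blocks until all have reported,
instead of polling the tasks in order.

```ada
with Ada.Text_IO;

procedure Collect_Demo is
   Num_Workers : constant := 4;

   --  Results accepts deposits from any task, in any order, and
   --  releases Wait_All only once every worker has reported.
   protected Results is
      procedure Deposit (Value : Integer);
      entry Wait_All (Total : out Integer);
   private
      Sum   : Integer := 0;
      Count : Natural := 0;
   end Results;

   protected body Results is
      procedure Deposit (Value : Integer) is
      begin
         Sum   := Sum + Value;
         Count := Count + 1;
      end Deposit;

      entry Wait_All (Total : out Integer) when Count = Num_Workers is
      begin
         Total := Sum;
      end Wait_All;
   end Results;

   task type Worker (Id : Natural);
   task body Worker is
   begin
      Results.Deposit (Id * Id);  --  stand-in for real work
   end Worker;

   W1 : Worker (1);
   W2 : Worker (2);
   W3 : Worker (3);
   W4 : Worker (4);

   Total : Integer;
begin
   Results.Wait_All (Total);
   Ada.Text_IO.Put_Line ("total:" & Integer'Image (Total));
end Collect_Demo;
```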

But I believe you are correct that a better storage pool is probably the answer.

Maybe we need an Ada binding to Boost? :-)

- Brian

Luis P. Mendes

Feb 19, 2011, 7:13:47 PM


I'd like to thank everyone who answered.
For someone like me, who has learnt (or tried to learn) several
programming languages on my own, with no degree in this area, C++ really
sounds cryptic.
I always like to learn by example, and although Ada must be very well
documented, it may not be obvious to me how to solve some issues.

My main doubt is the amount of aid I can get if I embark in the Ada ship.
But I surely will give it a try.
I've read a lot about the language and seen some books.
Is it advisable for a beginner like me to learn Ada from an Ada 95 book?
I was thinking of the Ada 95: The Craft of Object-Oriented Programming
book.
Any other recommended book? Programming in Ada 2005 seems expensive just
to try the language.

Free resources on the Internet don't seem to include many howtos or
guides for the 2005 specification.
Or did I miss something?


Luis

Luis P. Mendes

Feb 19, 2011, 7:20:56 PM
Sat, 19 Feb 2011 13:07:58 +0000, Brian Drummond wrote:

> Ada can easily bind to C libraries, it's standard and well documented.
>
> However there already exist bindings to some graphics libraries and data
> visualisation tools - look at GTKAda and QTAda for GUI and some graphics
> bindings, and PLPlot for data visualisation. One of these may work for
> you.
>
> C++ bindings are also possible, but with some work and (currently) some
> limitations.
> A GCC recent enough to support "-f-dump-ada-spec" will auto-generate an
> Ada spec from C++ sources, which will save a lot of the work. (Adacore
> "libre" 2010 has it; the FSF GCC 4.5.0 has not. Anyone know if it made
> it into 4.5.1 or 4.6.0?)
>
> I would currently treat that binding as a starting point rather than a
> complete solution. For example, it (libre "GPL2010" from Adacore) has
> problems with templates. (Especially when your well-proven C++ template
> library still has bugs that Ada generics would have caught first time
> through the compiler!)

Would you mind giving me an example?
Please consider the following C++ code:
===== header file
$ cat aleatorio.h
#ifndef GUARD_aleatorio_h
#define GUARD_aleatorio_h

#include <unistd.h>
#include <ctime>
#include <cstdlib>

void iniciarSemente();
double gerarAleatorio(int a, int b);
int gerarAleatorioInteiro(int a, int b);
int arredondar(double res);

#endif

===== source file
$ cat aleatorio.cpp
#include <unistd.h>
#include <ctime>
#include <cstdlib>
#include "aleatorio.h"
#include <math.h>

using std::srand;
using std::rand;

void iniciarSemente() {
    srand(time(NULL));
    // srand(10);  // always generate the same seed, for comparisons
}

double gerarAleatorio(int a, int b) {
    return (b - a) * ((double) rand() / RAND_MAX) + a;
}

int arredondar(double res) {
    return (res > 0.0) ? floor(res + 0.5) : ceil(res - 0.5);
}

int gerarAleatorioInteiro(int a, int b) {
    // check the placement of the (int) cast and the parentheses
    float res;
    res = gerarAleatorio(a, b);
    return arredondar(res);
}

=====

From Ada, how can I use these .h and .cpp files to call, for example,
gerarAleatorioInteiro(0,10)?


Luis

Marc A. Criley

Feb 19, 2011, 8:36:29 PM
On 02/19/2011 06:13 PM, Luis P. Mendes wrote:

> My main doubt is the amount of aid I can get if I embark in the Ada ship.

Aside from books and the Ada wiki sites, there's plenty of personal help
available as well.

As you see here, comp.lang.ada is a rich and ready source of information
and assistance.

Other active venues include StackOverflow (www.stackoverflow.com): just
post a question and tag it with "Ada". Several SO members watch for those.

And while it's not so much a Q&A site, the Ada sub-reddit
(www.reddit.com/r/ada) has lots of member-submitted links to interesting
Ada articles and such. (And Ada questions are accepted -- I should know,
I'm the moderator :-)

> But I surely will give it a try.

Good luck, and don't hesitate to ask when you need some assistance.

Marc A. Criley

mockturtle

Feb 20, 2011, 4:59:18 AM
Hi, just my fast 2 cents...

On Sunday, February 20, 2011 1:13:47 AM UTC+1, Luis P. Mendes wrote:

> My main doubt is the amount of aid I can get if I embark in the Ada ship.

As someone else pointed out, c.l.a. is a good place to get help. In my opinion, this is a very good newsgroup: a very high light/heat ratio and good answers.

> But I surely will give it a try.
> I've read a lot about the language and seen some books.
> Is it or is it not advisable for beginner like me to lear Ada from a 95
> book? I was thinking in the Ada 95: The Craft of Object-Oriented
> Programming book.
> Any other recommended book? The Programming in Ada 2005 seems expensive
> just to try the language.
>
> Free resources from the Internet don't seem to include much howtos or
> guides of the 2005 specification.
> Or did I miss something?
>

I do not know if someone already pointed you at the wikibook:

http://en.wikibooks.org/wiki/Ada_Programming

It is quite good, even if some parts are not complete.

If my history can help: I learnt Ada by myself, starting by reading tutorials and free material found around the net and using it in mini (and not-so-mini) projects of mine. After getting some experience, you can use as a reference the well-known Reference Manual (RM for friends :-)

http://www.adaic.org/resources/add_content/standards/05rm/html/RM-TTL.html

Brian Drummond

Feb 20, 2011, 5:37:15 AM
On 20 Feb 2011 00:13:47 GMT, "Luis P. Mendes" <luisl...@gmailXXX.com> wrote:

>Fri, 18 Feb 2011 16:20:55 -0800, Edward Fish wrote:

>I'd like to thank everyone that answered.

>My main doubt is the amount of aid I can get if I embark in the Ada ship.


>But I surely will give it a try.
>I've read a lot about the language and seen some books.
>Is it or is it not advisable for beginner like me to lear Ada from a 95
>book? I was thinking in the Ada 95: The Craft of Object-Oriented
>Programming book.

>Any other recommended book? The Programming in Ada 2005 seems expensive
>just to try the language.

But very worthwhile, once you have decided to try Ada.

>Free resources from the Internet don't seem to include much howtos or
>guides of the 2005 specification.
>Or did I miss something?

For free; the Ada Wikibook is very good in most respects, and does have some
coverage of Ada 2005.
http://en.wikibooks.org/wiki/Ada_Programming

- Brian

Brian Drummond

Feb 20, 2011, 5:42:52 AM

Except when I need to pass the array to the procedures which do the work.

I could rewrite the procedures to accept pointers ... then the package spec, and
the older apps which used the package. But the renamed array worked for all
these.

(Arguably it's not portable, because another Ada compiler might try passing the
whole array by copy... or is that outlawed by the LRM?)

- Brian

Brian Drummond

Feb 20, 2011, 5:50:06 AM
On 20 Feb 2011 00:20:56 GMT, "Luis P. Mendes" <luisl...@gmailXXX.com> wrote:

>Sat, 19 Feb 2011 13:07:58 +0000, Brian Drummond wrote:
>
>Would you mind giving me an example?
>Please consider the following C++ code:
>===== header file
> $ cat aleatorio.h

...


>=====
>
>From Ada, how can I use these h and cpp files to call, for example,
>gerarAleatorioInteiro(0,10)?
>

Start with
gcc -c -fdump-ada-spec aleatorio.h
(assuming a suitably recent gcc!)

Or see
http://gcc.gnu.org/onlinedocs/gnat_ugn_unw/Generating-Ada-Bindings-for-C-and-C_002b_002b-headers.html#Generating-Ada-Bindings-for-C-and-C_002b_002b-headers

No promises but I'll try to make an example with your code today.
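In the meantime, a hand-written spec for that example might look like this.
It is a sketch with one important assumption: the C++ declarations are
wrapped in extern "C" so that the symbol names are not C++-mangled
(otherwise the link names in the pragmas would have to be the mangled ones).

```ada
with Interfaces.C; use Interfaces.C;

package Aleatorio is
   --  void iniciarSemente();
   procedure iniciarSemente;
   pragma Import (C, iniciarSemente, "iniciarSemente");

   --  int gerarAleatorioInteiro(int a, int b);
   function gerarAleatorioInteiro (A, B : int) return int;
   pragma Import (C, gerarAleatorioInteiro, "gerarAleatorioInteiro");
end Aleatorio;
```

With that in place, and the compiled aleatorio object linked in, a call is
just Aleatorio.gerarAleatorioInteiro (0, 10), after calling
Aleatorio.iniciarSemente once.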

- Brian

Ludovic Brenta

Feb 20, 2011, 6:08:25 AM
Luis P. Mendes writes on comp.lang.ada:

> My main doubt is the amount of aid I can get if I embark in the Ada
> ship.

The answer is: plenty. As others have said, comp.lang.ada is friendly
and crawling with language experts.

> But I surely will give it a try.
> I've read a lot about the language and seen some books.
> Is it or is it not advisable for beginner like me to lear Ada from a 95
> book? I was thinking in the Ada 95: The Craft of Object-Oriented
> Programming book.

It is an excellent book for a beginner. I learned Ada 95 with it and
enjoyed the read even though, by then, I was already fluent in several
other languages. I recommend this book highly.

> Free resources from the Internet don't seem to include much howtos or
> guides of the 2005 specification. Or did I miss something?

The Ada Programming wikibook, as others have said.

--
Ludovic Brenta.

Brian Drummond

Feb 20, 2011, 9:34:35 AM
On Sat, 19 Feb 2011 18:25:44 +0000, Brian Drummond
<brian_d...@btconnect.com> wrote:

>On Sat, 19 Feb 2011 15:36:45 +0100, Georg Bauhaus
><rm-host...@maps.futureapps.de> wrote:
>
>>On 2/19/11 2:07 PM, Brian Drummond wrote:
>>> On 18 Feb 2011 22:52:38 GMT, "Luis P. Mendes"<luisl...@gmailXXX.com> wrote:
>>
>>>> I have some questions, however, that I'd like to be answered:
>>>> 1. If Ada is more type safe and restricted than C++, how can it be
>>>> significantly slower?
>>> Two possible reasons; both come down to the relative number of people developing
>>> for both languages.

[using tasking for the binary_trees benchmark, which currently uses a single
task...]


>>I vaguely remember that it has been tried before, but so far there
>>is no better solution.

>I have broken down and finally started to learn Ada's tasking. So far I have


>gone from 56s (CPU) 56s (elapsed) with one task, to 120s (CPU), 64s(elapsed)
>with multiple tasks (on a smallish 2-core laptop)...
>
>Disappointing.
>
>(If anybody's interested, I am using 9 tasks, one per "Depth" value in the main
>while loop.

Further odd results. I restructured the tasking so that I could vary the
number of tasks: 1, 2, 4, etc. The "CPU" utilisation remains virtually
identical, at 2 minutes; the elapsed time is 2 minutes with 1 task, or 1
minute with 2 or more (on a 2-core laptop; I'll report on a 4-core later).

Moving from GCC4.5.0 (FSF) to Adacore Libre 2010 makes no significant
difference. (OpenSuse 11.3, 64-bit, 2-core laptop)

Doubling the CPU time with a single task is suspicious, so I tried the following
experiment : source code below - main program only. For the rest, and the
original version, see
http://shootout.alioth.debian.org/u64q/performance.php?test=binarytrees

I moved virtually the entire body of the program into a single task.
This change alone doubles the "CPU" time. There appears to be a 100%
penalty associated simply with running the original program from within a
second task.

Anyone see what I'm doing wrong?
Any pitfalls to using tasking that I may have missed?

I suspect storage [de]allocation since that's under stress in this test, and
other benchmarks (e.g. Mandelbrot) don't see this penalty.
Should the task have its own separate storage pool, to avoid difficulties
synchronising with the main pool (even though the main program no longer
uses it)?


----------------------------------------------------------------
-- BinaryTrees experimental version
--
-- Ada 95 (GNAT)
--
-- Contributed by Jim Rogers
-- Tasking experiment: Brian Drummond
----------------------------------------------------------------
with Treenodes; use Treenodes;
with Ada.Text_Io; use Ada.Text_Io;
with Ada.Integer_Text_Io; use Ada.Integer_Text_Io;
with Ada.Command_Line; use Ada.Command_Line;
with Ada.Characters.Latin_1; use Ada.Characters.Latin_1;

procedure Binarytrees_tasktest is

   N : Natural := 1;

   task the_work is
      entry Start (Count : in Natural);
      entry Complete;
   end the_work;

   task body the_work is
      Min_Depth          : constant Positive := 4;
      Stretch_Tree       : TreeNode;
      Long_Lived_Tree    : TreeNode;
      Short_Lived_Tree_1 : TreeNode;
      Short_Lived_Tree_2 : TreeNode;
      Max_Depth          : Positive;
      Stretch_Depth      : Positive;
      Check              : Integer;
      Sum                : Integer;
      Depth              : Natural;
      Iterations         : Positive;
   begin
      accept Start (Count : in Natural) do
         N := Count;
      end Start;

      Max_Depth     := Positive'Max (Min_Depth + 2, N);
      Stretch_Depth := Max_Depth + 1;
      Stretch_Tree  := Bottom_Up_Tree (0, Stretch_Depth);
      Item_Check (Stretch_Tree, Check);
      Put ("stretch tree of depth ");
      Put (Item => Stretch_Depth, Width => 1);
      Put (Ht & " check: ");
      Put (Item => Check, Width => 1);
      New_Line;

      Long_Lived_Tree := Bottom_Up_Tree (0, Max_Depth);

      Depth := Min_Depth;
      while Depth <= Max_Depth loop
         Iterations := 2 ** (Max_Depth - Depth + Min_Depth);
         Check      := 0;
         for I in 1 .. Iterations loop
            Short_Lived_Tree_1 := Bottom_Up_Tree (Item => I, Depth => Depth);
            Short_Lived_Tree_2 := Bottom_Up_Tree (Item => -I, Depth => Depth);
            Item_Check (Short_Lived_Tree_1, Sum);
            Check := Check + Sum;
            Item_Check (Short_Lived_Tree_2, Sum);
            Check := Check + Sum;
         end loop;
         Put (Item => Iterations * 2, Width => 0);
         Put (Ht & " trees of depth ");
         Put (Item => Depth, Width => 0);
         Put (Ht & " check: ");
         Put (Item => Check, Width => 0);
         New_Line;
         Depth := Depth + 2;
      end loop;

      Put ("long lived tree of depth ");
      Put (Item => Max_Depth, Width => 0);
      Put (Ht & " check: ");
      Item_Check (Long_Lived_Tree, Check);
      Put (Item => Check, Width => 0);
      New_Line;

      accept Complete;
   end the_work;

begin
   if Argument_Count > 0 then
      N := Positive'Value (Argument (1));
   end if;
   the_work.Start (N);
   the_work.Complete;
end Binarytrees_tasktest;

------------------------------------------------------

jonathan
Feb 20, 2011, 10:45:33 AM
On Feb 20, 2:34 pm, Brian Drummond <brian_drumm...@btconnect.com>
wrote:

> I removed virtually the entire body of the program into a single task.
> This change alone doubles the "CPU" time. There appears to be a 100% penalty
> associated simply with running the original program from within a second task.


I noticed that too. As soon as I declare a task, (you
don't have to use it for anything) then run-time doubles.

So as far as I can tell, anything with gnat tasks (p-threads)
has this behavior. The C program that uses p-threads (C#5)
has the same behavior.

The more successful multicore programs use OpenMP, which
uses fibers, or coroutine-like threads.

If you want a multi-core Ada version, this might be a good
place to break out the Annex E (distributed systems) approach.

The shootout runs on Ubuntu (Debian really). Now that polyorb
is part of the standard Debian distribution (Squeeze), an
Annex E solution based on polyorb might be possible.

http://packages.debian.org/sid/polyorb-servers

If any tasks are declared on the polyorb side of things,
then I would not be surprised if this fails, but otherwise
it would be nice to see an Annex E solution in public view.

There are other distributed-system candidates, but I don't
intend to do much until the benchmark rules are ... clarified.
I'll whine more about that in another post.

J.

Brian Drummond
Feb 20, 2011, 11:18:27 AM
On Sun, 20 Feb 2011 07:45:33 -0800 (PST), jonathan <john...@googlemail.com>
wrote:

>On Feb 20, 2:34 pm, Brian Drummond <brian_drumm...@btconnect.com>
>wrote:
>
>> I removed virtually the entire body of the program into a single task.
>> This change alone doubles the "CPU" time. There appears to be a 100% penalty
>> associated simply with running the original program from within a second task.
>
>
>I noticed that too. As soon as I declare a task, (you
>don't have to use it for anything) then run-time doubles.
>
>So as far as I can tell, anything with gnat tasks (p-threads)
>has this behavior. The C program that uses p-threads (C#5)
>has the same behavior.

Thanks for the confirmation. I'll have to do some reading about Annex E, and
PolyOrb...

Meanwhile, slightly better news...

on my 4-core machine (AMD Phenom 955X4) the original single-CPU code runs in 30
seconds, and moving to a single task increases runtime to about 50 seconds.

Using 4 tasks, the CPU time remains about 50 seconds, but the elapsed time is
reduced to 18 seconds ( 9 tasks give 400% CPU usage for 12 seconds, then 100%
for a further 6). Assuming this scales to the reference machine (single-CPU 37
seconds, but Intel rather than AMD), that would give a runtime around 23s and
move Ada up from 16th to 10th place.

Or if I can employ finer grained tasking to balance the load better...
well, 16 seconds would give us 6th place, 15s would give 4th place.

- Brian

jonathan
Feb 20, 2011, 11:42:06 AM

> On Sat, 19 Feb 2011 15:36:45 +0100, Georg Bauhaus
>
> C #includes <apr_pool.h>, that is, it exercises the Apache memory pool,
> not what is available with plain C.
>
> C++ #includes a similar thing from the Boost libraries.


On Feb 19, 6:25 pm, Brian Drummond <brian_drumm...@btconnect.com>
wrote:
>


> even though you are not allowed to supply your own pool.
> Possibly harsh, but I can see the logic behind it.
>

Free lists are forbidden also.

The rules could not be clearer:

Please don't implement your own custom memory pool or
free list.

A few more observations.

First a minor point: On my machine I can make the Ada version the
same speed as the (single core) C#1 and the C++#2 (about 10%
faster) with a small change (I placed the patch at the
end of post).

Now a not so minor point: the (single core) gcc
compilations, Ada and C#1 and C++#2 are doing the same things and
running the same rate. They are allocating the same amount
of memory the same number of times and deallocating the same
amount of memory the same number of times.

So here's the puzzle: how do the C#7 and C++#6 programs
(single core) run so much faster?

Can we speed up X calls to "new" and Y calls to
"Unchecked_Deallocation" without reducing X or Y?

The C++#6 uses the Boost library's "object_pool.hpp", so I
suggest we go straight to the magical source code:

http://sourceforge.net/projects/boost/files/boost/1.45.0/

Find the file here:

boost_1_45_0/boost/pool/object_pool.hpp

Find the destructor,

~object_pool();

Now do a search for "free list", and you'll find things like:

// Start 'freed_iter' at beginning of free list
void * freed_iter = this->first;

// Increment freed_iter to point to next in free list
freed_iter = nextof(freed_iter);

Free lists. Oh what a shock.


The fastest C program, C#7, links to the apache run-time
system memory pool. To find out what it does, just google
"apr_pools.h", and click away.

The inner loop of the C#7 benchmark
uses a call to "apr_pool_clear" to deallocate the memory:

void apr_pool_clear (apr_pool_t * p)

Remarks:
This does not actually free the memory, it just allows
the pool to re-use this memory for the next allocation.

So it turns out that no memory at all is deallocated in the
C#7 benchmark's inner loop.

In the original benchmark, this inner loop is where the
memory is actually freed. For example, in the Ada version this
inner loop is where all the Unchecked_Deallocations are called.

But in the C#7 version, the deallocation is moved outside
of the benchmarking loop to a call to "apr_pool_destroy":

void apr_pool_destroy (apr_pool_t * p)

Destroy the pool. This takes similar action as
apr_pool_clear() and then frees all the memory.

Remarks:
This will actually free the memory.

I don't read C/C++ with any confidence, but it looks
to me like they get their faster-than-light performance by
dispensing with calls to Unchecked_Deallocation.

J.


* the small change: replace

Short_Lived_Tree_1 := Bottom_Up_Tree (Item => I, Depth => d);
Short_Lived_Tree_2 := Bottom_Up_Tree (Item =>-I, Depth => d);

Item_Check (Short_Lived_Tree_1, Sum);


Check := check + Sum;

Item_Check (Short_Lived_Tree_2, Sum);


Check := Check + Sum;

with

-- allocate new memory:
Short_Lived_Tree_1 := Bottom_Up_Tree (Item => I, Depth => d);
--Free mem allocated to Tree_1:
Item_Check (Short_Lived_Tree_1, Sum);


Check := check + Sum;

-- allocate new memory:
Short_Lived_Tree_2 := Bottom_Up_Tree (Item =>-I, Depth => d);
--Free mem allocated to Tree_2:
Item_Check (Short_Lived_Tree_2, Sum);

Pascal Obry
Feb 20, 2011, 2:49:10 PM
to jonathan
Jonathan,

> If you want a multi-core Ada version, this might be a good
> place to break out the Annex E (distributed systems) approach.

I really don't see how a distributed application could run faster than a
multi-threaded one on a single machine! So PolyORB is certainly not the
solution to this problem.

Pascal.

--

--|------------------------------------------------------
--| Pascal Obry Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--| http://www.obry.net - http://v2p.fr.eu.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver keys.gnupg.net --recv-key F949BD3B

Brian Drummond
Feb 20, 2011, 2:54:30 PM
On 20 Feb 2011 00:20:56 GMT, "Luis P. Mendes" <luisl...@gmailXXX.com> wrote:

>Sat, 19 Feb 2011 13:07:58 +0000, Brian Drummond escreveu:
>
>> Ada can easily bind to C libraries, it's standard and well documented.

>> C++ bindings are also possible, but with some work and (currently) some


>> limitations.
>> A GCC recent enough to support "-fdump-ada-spec" will auto-generate an
>> Ada spec from C++ sources, which will save a lot of the work.
>

>Would you mind giving me an example?

See below...

>Please consider the following C++ code:
>===== header file
>$ cat aleatorio.h

>===== source file
>$ cat aleatorio.cpp

>=====


>
>From Ada, how can I use these h and cpp files to call, for example,
>gerarAleatorioInteiro(0,10)?

Here is what I did.

1) Comment out the #includes in aleatorio.h.
They are unused, enlarge the namespace, and are repeated in the .cpp file
anyway.

Save it as aleatorio.hpp. (This forces C++-style Ada specs rather than C-style,
which is essential to link to C++ code)

2) Generate the specs automatically.
/usr/gnat/bin/gcc -fdump-ada-spec aleatorio.hpp
produces an automatic spec file
aleatorio_hpp.ads
-------------------------------
with Interfaces.C; use Interfaces.C;

package aleatorio_hpp is

procedure iniciarSemente; -- aleatorio.hpp:8:21
pragma Import (CPP, iniciarSemente, "_Z14iniciarSementev");

function gerarAleatorio (a : int; b : int) return double;
-- aleatorio.hpp:9:35
pragma Import (CPP, gerarAleatorio, "_Z14gerarAleatorioii");

function gerarAleatorioInteiro (a : int; b : int) return int;
-- aleatorio.hpp:10:39
pragma Import (CPP, gerarAleatorioInteiro, "_Z21gerarAleatorioInteiroii");

function arredondar (res : double) return int;
-- aleatorio.hpp:11:26
pragma Import (CPP, arredondar, "_Z10arredondard");

end aleatorio_hpp;
-------------------------------

3) Not essential but recommended ...

Write a wrapper package to hide the C interface and C types, and to make the
interface look like Ada: random_wrapper.ads, random_wrapper.adb.
(This constitutes a "thick binding", while package aleatorio_hpp is a "thin
binding".)
At this point you can choose what to expose to the Ada code;
I have been selective (or lazy!)

------------ random_wrapper.ads --------------
package random_wrapper is

procedure initialise_seed;
function random_between(a,b : in Integer) return Integer;

end random_wrapper;
------------ random_wrapper.adb --------------
with aleatorio_hpp;
use aleatorio_hpp;
with Interfaces.C;
use Interfaces.C;

package body random_wrapper is

procedure initialise_seed is
begin
iniciarSemente;
end initialise_seed;

function random_between(a,b : in Integer) return Integer is
begin
return Integer(gerarAleatorioInteiro (int(a), int(b)));
end random_between;

end random_wrapper;
----------------------------------------------

4) Write your Ada program...
------------ random.adb ----------------------
--Random number tester

with Ada.Text_Io; use Ada.Text_Io;
with Ada.Integer_Text_Io; use Ada.Integer_Text_Io;

with random_wrapper; use random_wrapper;

procedure random is

begin
initialise_seed;
Put("Five random numbers");
New_Line;
for i in 1 .. 5 loop
Put(random_between(1,100));
New_Line;
end loop;
end random;
----------------------------------------------

5) Compile the C++ portion (more complex examples may need a Makefile)

g++ -g -m64 -c -o aleatorio.o aleatorio.cpp

6) Build the Ada portion.

gnatmake -m64 -gnat05 -gnato -gnatwa -fstack-check -o random random.adb \
-largs ./aleatorio.o -lstdc++

Note additional arguments "-largs ./aleatorio.o -lstdc++" to gnatlink;
extend these if you add more C++ objects and libraries.

7)
Run it.

./random
Five random numbers
9
40
2
77
66

Brian Drummond
Feb 20, 2011, 2:57:12 PM
On Sun, 20 Feb 2011 20:49:10 +0100, Pascal Obry <pas...@obry.net> wrote:

>Jonathan,
>
>> If you want a multi-core Ada version, this might be a good
>> place to break out the Annex E (distributed systems) approach.
>
>I really don't see how a distributed application could run faster than a
>multi-threaded one on a single machine! So PolyORB is certainly not the
>solution to this problem.

You are probably right ... unless PolyORB performs better than the 100% overhead
(on this test case) imposed by the pthread library (which I think is how GNAT
implements its tasking).

- Brian.

Brian Drummond
Feb 20, 2011, 3:02:04 PM
On Sun, 20 Feb 2011 08:42:06 -0800 (PST), jonathan <john...@googlemail.com>
wrote:

>


>> On Sat, 19 Feb 2011 15:36:45 +0100, Georg Bauhaus
>>
>> C #includes <apr_pool.h>, that is, it exercises the Apache memory pool,
>> not what is available with plain C.
>>
>> C++ #includes a similar thing from the Boost libraries.
>
>
>On Feb 19, 6:25 pm, Brian Drummond <brian_drumm...@btconnect.com>
>wrote:
>>
>> even though you are not allowed to supply your own pool.
>> Possibly harsh, but I can see the logic behind it.
>
>Free lists are forbidden also.
>
>The rules could not be clearer:
>
> Please don't implement your own custom memory pool or
> free list.

I read the rule differently; and here's my understanding of the logic behind it:

Use any memory pool and/or free list you like, as long as they are publicly
available (e.g. from Boost, Apache, perhaps the Booch Ada Components etc) but
don't create one specifically tuned for the benchmark.

Thus it encourages quality of not only language and compiler implementation, but
also libraries and other re-usable components. Which is realistic, because they
are part of the package you would consider when choosing a language for a
project.

>A few more observations.
>
>First a minor point: On my machine I can make the Ada version the
>same speed as the (single core) C#1 and the C++#2 (about 10%
>faster) with a small change (I placed the patch at the
>end of post).

So, like for like, Ada has the same performance...

>So here's the puzzle. How do the C#7 and C++#6 programs
>(single core) run so much faster.
>

>The C++#6 uses the Boost library's "object_pool.hpp", so I
>suggest we go straight to the magical source code:
>
>http://sourceforge.net/projects/boost/files/boost/1.45.0/

... but the available C/C++ libraries are more highly tuned (or cheat...)?

- Brian

jonathan
Feb 20, 2011, 3:10:13 PM
On Feb 20, 7:57 pm, Brian Drummond <brian_drumm...@btconnect.com>
wrote:

My preferred plan B would link to MPI, which distributes mpi-tasks
over the available cores. I don't know if that will work either!

J.

Pascal Obry
Feb 20, 2011, 4:15:56 PM
to jonathan

Jonathan,

> My preferred plan B would link to MPI, which distributes mpi-tasks
> over the available cores. I don't know if that will work either!

That won't be better. MPI is for non shared memory used in clusters.
Tasking on a single machine is the best option AFAIK.

Vinzent Hoefler
Feb 20, 2011, 4:26:18 PM
Pascal Obry wrote:

>> My preferred plan B would link to MPI, which distributes mpi-tasks
>> over the available cores. I don't know if that will work either!
>
> That won't be better. MPI is for non shared memory used in clusters.

Sure?

I may remember the MPI spec wrong, but parallelizing for-loops on a
cluster does not seem very efficient, and that was at least one of
the things the specification described. And MPI was generally designed
as a (C-)compiler extension, so clusters do not seem an appropriate
target either.


Vinzent.

--
You know, we're sitting on four million pounds of fuel, one nuclear weapon,
and a thing that has 270,000 moving parts built by the lowest bidder.
Makes you feel good, doesn't it?
-- Rockhound, "Armageddon"

Vinzent Hoefler
Feb 20, 2011, 4:33:21 PM
Vinzent Hoefler wrote:

> Pascal Obry wrote:
>
>>> My preferred plan B would link to MPI, which distributes mpi-tasks
>>> over the available cores. I don't know if that will work either!
>>
>> That won't be better. MPI is for non shared memory used in clusters.
>
> Sure?

Ah. After sending the post, I recognized the source of the freaking
acronyms confusion. I sure meant OpenMP, not (Open)MPI.

Pascal Obry
Feb 20, 2011, 4:36:19 PM
to Vinzent Hoefler

Vinzent,

> Sure?

I've used OpenMP, MPI, Ada tasking, Ada Annex-E and PolyORB.

> I may remember the MPI-Spec wrong, but parallelizing for-loops on a
> cluster do not seem a very efficient way and that was at least one of
> the things the specification described. And MPI was generally designed
> as (C-)compiler extension, so clusters do not seem an appropriate target
> neither.

MPI is a library. Aren't you confusing it with OpenMP?

Pascal.

Vinzent Hoefler
Feb 20, 2011, 4:50:20 PM
Pascal Obry wrote:

> MPI is a library, Aren't you confusing with OpenMP?

Yeah.

jonathan
Feb 20, 2011, 5:18:51 PM

Hi Pascal,

MPI is more versatile than you think. When I run 8 MPI tasks on a
single machine with 8 cores, it works beautifully. When I run 48 MPI
tasks on six 8-core machines, it works fine, but you notice the
degraded bandwidth from the network. All of this is transparent to the
user: all he sees is the 8 cores or the 48 cores. The basic MPI
communications model is a remote rendezvous, which I use extensively.
I have nothing but praise for it. Asynchronous communication is
available but has never been much use in my applications.

But actually, I'd be surprised if MPI worked well in the present
application (benchmarking "new" and unchecked_deallocation). I am
*still* not planning to do any work on this benchmark.

J.

Simon Wright
Feb 20, 2011, 5:47:05 PM
Brian Drummond <brian_d...@btconnect.com> writes:

> the 100% overhead (on this test case) imposed by the pthread library
> (which I think is how Gnat implements its tasking)

Here (Mac OS X, GCC 4.6.0 x86_64 experimental), I tried modifying the
Ada code to use the same tasking (threading) structure as the C GNU GCC
#5 version. Result (I only checked with parameter 16):

C: real 5.1 user 9.0
GNAT (orig): real 6.0 user 5.8
GNAT (mod): real 5.3 user 9.4

(the 'user' value, which is what time(1) reports, is apparently the
total CPU time, while the 'real' time is the elapsed time; this machine
has 2 cores, both of which seem to run at about 90% in the test).

Brian Drummond
Feb 21, 2011, 7:52:24 AM

So again, there is an overhead (maybe 80%) imposed by tasking, and significant
improvements won't appear until there are more than 2 processors.

I can't be sure I'm reading the C correctly, but it looks as if it's creating a
new pthread (task) for each depth step, similar to my first attempt.

I have now decoupled the number of tasks from the problem, to simplify
experiments with different numbers of tasks, and improve load balancing.
It runs approx. 4x as fast with 4 or 8 tasks as it does with 1 task (on a 4-core
machine!), therefore only about 2x as fast as it does without tasking.

As this is my first experiment with tasking, comments are welcome (and I'd be
interested to see your version). If people think this is worth submitting to the
shootout, I'll go ahead.

- Brian

----------------------------------------------------------------
-- BinaryTrees


--
-- Ada 95 (GNAT)
--
-- Contributed by Jim Rogers
-- Tasking experiment : Brian Drummond
----------------------------------------------------------------
with Treenodes; use Treenodes;
with Ada.Text_Io; use Ada.Text_Io;
with Ada.Integer_Text_Io; use Ada.Integer_Text_Io;
with Ada.Command_Line; use Ada.Command_Line;
with Ada.Characters.Latin_1; use Ada.Characters.Latin_1;

procedure Binarytrees_tasking is
   -- Change "CPUs" to control number of tasks created
   CPUs      : constant Positive := 8;
   BlockSize : Positive;

   Min_Depth : constant Positive := 4;

   N               : Natural := 1;
   Stretch_Tree    : TreeNode;
   Long_Lived_Tree : TreeNode;
   Max_Depth       : Positive;
   Stretch_Depth   : Positive;
   Iteration       : Positive;
   Iterations      : Positive;
   Sum             : Integer;
   Check           : Integer;
   Depth           : Natural;

   task type check_this_depth is
      entry Start (Iteration, Size : Positive; To_Depth : in Natural);
      entry Complete (Result : out Integer);
   end check_this_depth;

   task body check_this_depth is
      Check : Integer;
      Sum   : Integer;
      Depth : Natural;
      First : Positive;
      Last  : Positive;
      Short_Lived_Tree_1 : TreeNode;
      Short_Lived_Tree_2 : TreeNode;
   begin
      loop
         select
            accept Start (Iteration, Size : Positive; To_Depth : in Natural) do
               First := Iteration;
               Last  := Iteration + Size - 1;
               Depth := To_Depth;
            end Start;
            Check := 0;
            for I in First .. Last loop
               Short_Lived_Tree_1 := Bottom_Up_Tree (Item => I, Depth => Depth);
               Short_Lived_Tree_2 := Bottom_Up_Tree (Item => -I, Depth => Depth);
               Item_Check (Short_Lived_Tree_1, Sum);
               Check := Check + Sum;
               Item_Check (Short_Lived_Tree_2, Sum);
               Check := Check + Sum;
            end loop;
            accept Complete (Result : out Integer) do
               Result := Check;
            end Complete;
         or
            terminate;
         end select;
      end loop;
   end check_this_depth;

   subtype Task_Count is Positive range 1 .. CPUs;
   Tasks : array (Task_Count) of check_this_depth;

begin
   if Argument_Count > 0 then
      N := Positive'Value (Argument (1));
   end if;

   Max_Depth     := Positive'Max (Min_Depth + 2, N);
   Stretch_Depth := Max_Depth + 1;
   Stretch_Tree  := Bottom_Up_Tree (0, Stretch_Depth);
   Item_Check (Stretch_Tree, Check);
   Put ("stretch tree of depth ");
   Put (Item => Stretch_Depth, Width => 1);
   Put (Ht & " check: ");
   Put (Item => Check, Width => 1);
   New_Line;

   Long_Lived_Tree := Bottom_Up_Tree (0, Max_Depth);

   Depth := Min_Depth;
   while Depth <= Max_Depth loop
      Iterations := 2 ** (Max_Depth - Depth + Min_Depth);
      Check := 0;

      -- Set up tasking parameters for reasonable task granularity.
      -- Too large and we can't balance CPU loads;
      -- too small and we waste time in task switches.
      -- Not very critical - anything more complex is probably a waste of effort.
      BlockSize := 2**10;
      if Iterations < BlockSize * CPUs then
         BlockSize := 1;
      end if;

      -- Check that Iterations is a multiple of BlockSize * CPUs.
      -- Error out otherwise (dealing with the remainder is trivial but tedious).
      pragma Assert (Iterations mod (BlockSize * CPUs) = 0,
                     "Iteration count not supported!");

      Iteration := 1;
      while Iteration <= Iterations loop
         for J in Task_Count loop
            Tasks (J).Start (Iteration, BlockSize, Depth);
            Iteration := Iteration + BlockSize;
         end loop;
         for J in Task_Count loop
            Tasks (J).Complete (Sum);
            Check := Check + Sum;
         end loop;
      end loop;
      Put (Item => Iterations * 2, Width => 0);
      Put (Ht & " trees of depth ");
      Put (Item => Depth, Width => 0);
      Put (Ht & " check: ");
      Put (Item => Check, Width => 0);
      New_Line;
      Depth := Depth + 2;
   end loop;
   Put ("long lived tree of depth ");
   Put (Item => Max_Depth, Width => 0);
   Put (Ht & " check: ");
   Item_Check (Long_Lived_Tree, Check);
   Put (Item => Check, Width => 0);
   New_Line;

end Binarytrees_tasking;

Simon Wright
Feb 21, 2011, 8:44:04 AM
Brian Drummond <brian_d...@btconnect.com> writes:

> As this is my first experiment with tasking, comments are welcome (and
> I'd be interested to see your version).

See end.

> If people think this is worth submitting to the shootout, I'll go
> ahead.

I think it definitely is: the only Ada code for binary-trees is
single-threaded, so looks needlessly poor.

(attachment: binarytrees.ada)

Shark8
Feb 21, 2011, 9:15:44 PM
On Feb 21, 4:52 am, Brian Drummond <brian_drumm...@btconnect.com>
wrote:

>
> As this is my first experiment with tasking, comments are welcome (and I'd be
> interested to see your version). If people think this is worth submitting to the
> shootout, I'll go ahead.
>
> - Brian

I used arrays for the most part, and then expanded it out to a
recursive-definition for the trees which would be too large for
the stack during creation.

It may be going against the spirit of the competition, but nothing
there said that we couldn't use arrays as binary-trees.


-- Package B_Tree
-- by Joey Fish

Package B_Tree is


   -- Contest rules state:
   --   define a tree node class and methods, a tree node record and
   --   procedures, or an algebraic data type and functions.
   --
   -- B_Tree is the definition of such a record and procedures.

Type Binary_Tree is Private;

   Function Build_Tree (Item : Integer; Depth : Natural) Return Binary_Tree;
   Function Subtree    (Tree : Binary_Tree; Left : Boolean) Return Binary_Tree;
   Function Item_Check (This : Binary_Tree) Return Integer;
   Procedure Free      (Tree : In Out Binary_Tree);

Private

Type Node_Data;
Type Data_Access is Access Node_Data;
SubType Not_Null_Data_Access is Not Null Data_Access;

Function Empty Return Not_Null_Data_Access;
Type Binary_Tree( Extension : Boolean:= False ) is Record
Data : Not_Null_Data_Access:= Empty;
End Record;

End B_Tree;

--- B_Trees body
with
Ada.Text_IO,
--Ada.Numerics.Generic_Elementary_Functions,
Unchecked_Deallocation;

Package Body B_Tree is

   -- In some cases the allocation of the array is too large, so we can
   -- split that off into another tree; for that we have Tree_Array, which
   -- is a Boolean-indexed array. {The Index is also shorthand for Is_Left
   -- on such.}
   Type Tree_Array is Array (Boolean) of Binary_Tree;

   -- For trees of up to 2**17 items we store the nodes as a simple array.
   Type Integer_Array is Array (Positive Range <>) of Integer;
Type Access_Integers is Access Integer_Array;
Type Node_Data(Extended : Boolean:= False) is Record
Case Extended is
When False => A : Not Null Access_Integers;
When True => B : Tree_Array;
end Case;
End Record;


-- Returns the Empty List's Data.
Function Empty Return Not_Null_Data_Access is
begin
Return New Node_Data'( A => New Integer_Array'(2..1 => 0),
Others => <> );
end Empty;

   -- We'll need an integer version of the base-2 logarithm.
Function lg( X : In Positive ) Return Natural is
--------------------------------------------
-- Base-2 Log with a jump-table for the --
-- range 1..2**17-1 and a recursive call --
-- for all values greater. --
--------------------------------------------
begin
Case X Is
When 2**00..2**01-1 => Return 0;
When 2**01..2**02-1 => Return 1;
When 2**02..2**03-1 => Return 2;
When 2**03..2**04-1 => Return 3;
When 2**04..2**05-1 => Return 4;
When 2**05..2**06-1 => Return 5;
When 2**06..2**07-1 => Return 6;
When 2**07..2**08-1 => Return 7;
When 2**08..2**09-1 => Return 8;
When 2**09..2**10-1 => Return 9;
When 2**10..2**11-1 => Return 10;
When 2**11..2**12-1 => Return 11;
When 2**12..2**13-1 => Return 12;
When 2**13..2**14-1 => Return 13;
When 2**14..2**15-1 => Return 14;
When 2**15..2**16-1 => Return 15;
When 2**16..2**17-1 => Return 16;
When Others => Return 16 + lg( X / 2**16 );
End Case;
end lg;

   Function Build_Tree (Item : Integer; Depth : Natural) Return Binary_Tree is
      -- Now we need a function to allow the calculation of a node's value
      -- given that node's index.
      Function Value (Index : Positive) Return Integer is
         Level : Integer := lg (Index);
         -- Note: that is the same as
         --   Integer( Float'Truncation( Log( Float(Index),2.0 ) ) );
         -- but without the Integer -> Float & Float -> Integer conversions.
begin
Return (-2**(1+Level)) + 1 + Index;
end;

Begin
If Depth < 17 then
Return Result : Binary_Tree do
Result.Data:= New Node_Data'
( A => New Integer_Array'(1..2**Depth-1 => <>), Others => <> );
For Index in Result.Data.A.All'Range Loop
Result.Data.All.A.All( Index ):= Value(Index) + Item;
End Loop;
End Return;
else
Return Result : Binary_Tree do
Result.Data:= New Node_Data'
( B =>
(True => Build_Tree(-1,Depth-1), False =>
Build_Tree(0,Depth-1)),
Extended => True );
End Return;

end if;
End Build_Tree;

   Function Subtree (Tree : Binary_Tree; Left : Boolean) Return Binary_Tree is
   Begin
      if Tree.Data.Extended then
         -- If it is a large enough tree, then we already have it split.
         Return Tree.Data.B (Left);
      else
         -- If not then we just need to calculate the middle and return the
         -- proper half [excluding the first (root) node].
         Declare
            Data        : Integer_Array Renames Tree.Data.All.A.All;
            Data_Length : Natural := Data'Length;

            Mid_Point : Positive := (Data_Length / 2) + 1;
            SubType LeftTree  is Positive Range
               Positive'Succ (1) .. Mid_Point;
            SubType RightTree is Positive Range
               Positive'Succ (Mid_Point) .. Data_Length;
Begin
Return Result : Binary_Tree Do
if Left then
Result.Data:= New Node_Data'
( A => New Integer_Array'( Data(LeftTree) ),
Others => <> );
else
Result.Data:= New Node_Data'
( A => New Integer_Array'( Data(RightTree) ),
Others => <> );
end if;
End Return;
End;
end if;
End Subtree;

Function Check_Sum( Data: In Integer_Array ) Return Integer is
Depth : Natural:= lg(Data'Length);
SubType Internal_Nodes is Positive Range 1..2**Depth-1;
begin
Return Result : Integer:= 0 do
For Index in Internal_Nodes Loop
Declare
Left : Positive:= 2*Index;
Right : Positive:= Left+1;
Begin
If Index mod 2 = 1 then
Result:= Result - Right + Left;
else
Result:= Result + Right - Left;
end if;
End;
End Loop;
End Return;
end Check_Sum;

   Function Item_Check (This : Binary_Tree) Return Integer is
      -- For large trees this function calls itself recursively until the
      -- smaller format is encountered; otherwise, for small trees, it acts
      -- as a pass-through to Check_Sum.
   Begin
If This.Data.Extended then
Declare

Begin
Return Result: Integer:= -1 do
Result:= Result
+ Item_Check( This.Data.B(False) )
- Item_Check( This.Data.B(True ) );
End Return;
End;
else
Declare
Data : Integer_Array Renames This.Data.All.A.All;
Begin
Return Check_Sum( Data );
End;
end if;
End Item_Check;

Procedure Free (Tree : In Out Binary_Tree) is
procedure Deallocate is new
Unchecked_Deallocation(Integer_Array, Access_Integers);
procedure Deallocate is new
Unchecked_Deallocation(Node_Data, Data_Access);

Procedure Recursive_Free (Tree : In Out Binary_Tree) is
begin
if Tree.Data.All.Extended then
Recursive_Free( Tree.Data.B(True ) );
Recursive_Free( Tree.Data.B(False) );
Declare
Data : Data_Access;
For Data'Address Use Tree.Data'Address;
Pragma Import( Ada, Data );
Begin
Deallocate(Data);
End;
else
Declare
Data : Data_Access;
For Data'Address Use Tree.Data.All.A'Address;
Pragma Import( Ada, Data );
Begin
Deallocate( Data );
Data:= Empty;
End;
end if;
end Recursive_Free;

begin
Recursive_Free( Tree );
Tree.Data:= Empty;
end Free;

Begin
Null;
End B_Tree;

-- BinaryTrees.adb
-- by Jim Rogers
-- modified by Joey Fish

With
   B_Tree,
   Ada.Text_Io,
   Ada.Integer_Text_Io,
   Ada.Real_Time,
   Ada.Command_Line,
   Ada.Characters.Latin_1,
   Ada.Numerics.Generic_Elementary_Functions;

Use
   B_Tree,
   Ada.Text_Io,
   Ada.Command_Line,
   Ada.Integer_Text_Io,
   Ada.Characters.Latin_1;

procedure BinaryTrees is
--Depths
Min_Depth : Constant Positive := 4;
Max_Depth : Positive;
Stretch_Depth: Positive;
N : Natural := 1;

-- Trees
Stretch_Tree,
Long_Lived_Tree : Binary_Tree;


Check,


Sum : Integer;
Depth : Natural;
Iterations : Positive;

Package Fn is New
Ada.Numerics.Generic_Elementary_Functions( Float );
Function Value( Index : Positive ) Return Integer is
Level : Integer:=
Integer( Float'Truncation( Fn.Log( Float(Index),2.0 ) ) );
begin
Return (-2**(1+Level)) + 1 + Index;
end;


begin
-- For Index in 1..2**3-1 loop
-- Put_Line( Value(Index)'img );
-- end loop;

-- Declare
-- -- allocate new memory:
-- Short_Lived_Tree_1: Binary_Tree:= Build_Tree(0, 20);
-- Begin
-- Sum:= Item_Check (Short_Lived_Tree_1);
-- -- Check := Check + Sum;
-- -- Free( Short_Lived_Tree_1 );
-- Put(Check'Img);
-- End;


if Argument_Count > 0 then
N := Positive'Value(Argument(1));
end if;
Max_Depth := Positive'Max(Min_Depth + 2, N);
Stretch_Depth := Max_Depth + 1;

Stretch_Tree := Build_Tree(0, Stretch_Depth);
Check:= Item_Check(Stretch_Tree);


Put("stretch tree of depth ");
Put(Item => Stretch_Depth, Width => 1);
Put(Ht & " check: ");
Put(Item => Check, Width => 1);
New_Line;

Long_Lived_Tree := Build_Tree(0, Max_Depth);

Depth := Min_Depth;
while Depth <= Max_Depth loop
Iterations := 2**(Max_Depth - Depth + Min_Depth);
Check := 0;

for I in 1..Iterations loop

Declare
Short_Lived_Tree_1: Binary_Tree:= Build_Tree(I, Depth);
Begin
Sum:= Item_Check (Short_Lived_Tree_1);


Check := Check + Sum;

Free( Short_Lived_Tree_1 );
End;


Declare
Short_Lived_Tree_2: Binary_Tree:= Build_Tree(-I, Depth);
Begin
Sum:= Item_Check (Short_Lived_Tree_2);


Check := Check + Sum;

Free( Short_Lived_Tree_2 );
End;
end loop;

Put(Item => Iterations * 2, Width => 0);
Put(Ht & " trees of depth ");
Put(Item => Depth, Width => 0);
Put(Ht & " check: ");
Put(Item => Check, Width => 0);
New_Line;
Depth := Depth + 2;
end loop;
Put("long lived tree of depth ");
Put(Item => Max_Depth, Width => 0);
Put(Ht & " check: ");

check:= Item_Check(Long_Lived_Tree);


Put(Item => Check, Width => 0);
New_Line;

end BinaryTrees;

Luis P. Mendes
Feb 23, 2011, 5:19:50 PM

Thank you very much Brian!

I guess that this example or another one like this could be included in
wiki or other place for newcomers.


Luis

Brian Drummond
Feb 23, 2011, 7:19:02 PM

Submitted and accepted, but puzzlingly poor
("CPU seconds" blowing up 3x rather than 2x).

I don't have a quad-core Intel system, but the x86 (32-bit) results are actually
worse than my laptop!

- Brian

Jacob Sparre Andersen
Feb 24, 2011, 2:41:32 AM
Brian Drummond wrote:

> Submitted and accepted, but puzzlingly poor
> ("CPU seconds" blowing up 3x rather than 2x).
>
> I don't have a quad-core Intel system, but the x86 (32-bit) results
> are actually worse than my laptop!

Have you checked which compiler version and options the shootout uses?
It could explain a part of the difference.

Greetings,

Jacob
--
Photo of the day:
http://billeder.sparre-andersen.dk/dagens/2011-02-18

Brian Drummond
Feb 24, 2011, 12:06:07 PM
On 23 Feb 2011 22:19:50 GMT, "Luis P. Mendes" <luisl...@gmailXXX.com> wrote:

>Sun, 20 Feb 2011 19:54:30 +0000, Brian Drummond escreveu:
>
>> On 20 Feb 2011 00:20:56 GMT, "Luis P. Mendes" <luisl...@gmailXXX.com>
>> wrote:
>>
>>>Sat, 19 Feb 2011 13:07:58 +0000, Brian Drummond escreveu:
>>>
>>>> Ada can easily bind to C libraries, it's standard and well documented.
>>>> C++ bindings are also possible, but with some work

>>>Would you mind giving me an example?
>> See below...

>Thank you very much Brian!
>
>I guess that this example or another one like this could be included in
>wiki or other place for newcomers.

If you want, you are welcome to use it for that purpose...

- Brian
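
[Editor's sketch of the thin-binding style discussed here. This is not Brian's posted example: it is a minimal stand-alone demo, assuming GNAT, that imports the standard C function strlen. The same pragma Import pattern applies to any C library, Graphviz included.]

```ada
--  Thin binding to the C library function
--    size_t strlen(const char *s);
with Ada.Text_IO;
with Interfaces.C;         use Interfaces.C;
with Interfaces.C.Strings; use Interfaces.C.Strings;

procedure Strlen_Demo is
   --  The body is supplied by libc; the linker resolves the symbol.
   function Strlen (S : chars_ptr) return size_t;
   pragma Import (C, Strlen, "strlen");

   Msg : chars_ptr := New_String ("hello");
begin
   Ada.Text_IO.Put_Line (size_t'Image (Strlen (Msg)));  --  length of "hello"
   Free (Msg);  --  New_String allocates; release the C-side copy
end Strlen_Demo;
```

Build with `gnatmake strlen_demo.adb`; no extra linker flags are needed for libc.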

Luis P. Mendes
Feb 27, 2011, 12:51:16 PM

I did so! It can be found at
http://wiki.ada-dk.org/index.php/C%2B%2B_bindings_example

Thank you once again Brian,

Luis

Adrian Hoe
Mar 1, 2011, 3:10:21 AM
On Feb 20, 8:13 am, "Luis P. Mendes" <luislupe...@gmailXXX.com> wrote:
> Fri, 18 Feb 2011 16:20:55 -0800, Edward Fish escreveu:
>
>
>
>
>
> > On Feb 18, 2:52 pm, "Luis P. Mendes" <luislupe...@gmailXXX.com> wrote:
> >> Hi,
>
> >> I have two projects to work, one of them in the data mining field and
> >> another regarding xml parsing.
> >> I've been learning C++ (from a Python, Pascal, VB background), due to
> >> it being fast (sure it depends on the implementation) and because it
> >> has a lot of libraries.
>
> >> But I find C++ a very complex language and Ada appeals to me specially
> >> for its overall safety.  Or maybe also because I don't like to go with
> >> majorities... :-)

>
> >> I have some questions, however, that I'd like to be answered: 1. If Ada
> >> is more type safe and restricted than C++, how can it be significantly
> >> slower?
> >> Please see:http://shootout.alioth.debian.org/u64q/benchmark.php?
> >> test=all&lang=gnat
> >> where for some tests, Ada is 2x, 3x, 4x and 5x slower. For the data
> >> mining field as I want to implement, speed is essential. I'll code in
> >> Linux and use gcc as a compiler/linker.
>
> >> 2. In C++ I can use lots of libraries. I'm thinking on data
> >> visualization libraries, for
> >> examplehttp://www.graphviz.org/Gallery/undirected/softmaint.html.
> >> I've read that Ada can use some C bindings.  Can I use any C library?
> >> Some? Is it easy?
> >> I don't want to drop C++ for Ada to build a project that later has to
> >> be rewritten in C++ because of lack of libraries.
>
> >> 3. Is there any kind of fast xml stream parser library? No DOM or SAX,
> >> just to read attributes.
>
> >> Luis
>
> > I'm going to answer in reverse-order. #3 - There is XMLAda; I mention it
> > only because I've heard of it. I haven't had a need for XML, much less a
> > FAST XML parser. But consider that you might not NEED a full-blown XML
> > parser if what you're doing is relatively simple: you could instead have
> > your type override the 'Read & 'Write attributes in the proper XML
> > format and use Streams.
>
> > #2 - This is highly dependent on you. Some people are perfectly happy
> > with a light binding, in which case it's EASY; some people want a THICK
> > binding in which case it's a bit harder because you have to design an
> > interface which essentially a) hides the C/C++ imports & calls, and b)
> > is in the "Ada Style." To take OpenGL for example instead of a function
> > taking a glenum you would subtype it out so that it takes ONLY the valid
> > values.
>
> > #1 - Speed is HEAVILY dependent on the implementation. Consider, for a
> > moment, sorting. A bubble-sort and a quick-sort are exactly the same in
> > terms of Input/Output [on an array of discrete types], but the speeds
> > are radically different. As Georg mentioned that shootout program used
> > the Unbounded version of strings, and that makes manipulation thereof
> > rather slow... it could likely have been done with normal strings a bit
> > faster but with a bit more effort and "dancing around" the fixed nature
> > of string-lengths.
>
> I'd like to thank everyone that answered.
> For me, that have learnt (or trying to learn) some programming languages
> by myself, with no graduation in this area, C++ really sound cryptic.
> I always like to learn by example, and although Ada must be very well
> documented, it can be not obvious for me to solve some issues.

>
> My main doubt is the amount of aid I can get if I embark in the Ada ship.
> But I surely will give it a try.
> I've read a lot about the language and seen some books.
> Is it or is it not advisable for beginner like me to lear Ada from a 95
> book? I was thinking in the Ada 95: The Craft of Object-Oriented
> Programming book.
> Any other recommended book? The Programming in Ada 2005 seems expensive
> just to try the language.

>
> Free resources from the Internet don't seem to include much howtos or
> guides of the 2005 specification.
> Or did I miss something?
>
> Luis


I have been developing database applications using Ada, as well as
web applications with AWS, for some time now. I would say (from my
experience) that Ada is a good choice. I can't imagine developing
those monster programs in anything else all by myself.

I develop and deploy on Mac OS X.

Get John Barnes' book. It is sort of like a bible of Ada.
--
Adrian Hoe
http://adrianhoe.com
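
[Editor's sketch of the thick-binding idea from Edward's reply quoted above: constrain a raw glenum parameter so that only the legal values can be passed. All names and values here are illustrative, not taken from a real OpenGL binding.]

```ada
--  Thick-binding sketch: the thin import accepts any GLenum,
--  while the thick wrapper accepts only the legal drawing modes,
--  so misuse is rejected at compile time.
with Interfaces.C;

package GL_Thick is
   type GLenum is new Interfaces.C.unsigned;

   GL_Points    : constant GLenum := 16#0000#;
   GL_Lines     : constant GLenum := 16#0001#;
   GL_Triangles : constant GLenum := 16#0004#;

   --  Thin layer: raw import, any GLenum value goes through.
   procedure GL_Begin (Mode : GLenum);
   pragma Import (C, GL_Begin, "glBegin");

   --  Thick layer: only the valid modes exist in the type.
   type Draw_Mode is (Points, Lines, Triangles);
   procedure Begin_Drawing (Mode : Draw_Mode);
end GL_Thick;
```

The package body would translate Draw_Mode to the corresponding GLenum constant and forward the call to GL_Begin.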

Thomas Løcke
Mar 1, 2011, 3:29:33 AM
On 2011-03-01 09:10, Adrian Hoe wrote:
> I have been developing database applications using Ada and as well as
> web applications with AWS for some time now. I would say (with my
> experience), Ada is a good choice. I can't imagine it if I develop
> those monster program in anything else all by myself.


I'm in the process of moving away from Apache/PHP/XSLT to AWS. So far
things are progressing very nicely; at least I haven't encountered any
real show-stoppers yet.

Do you have any advice to share? I have ~500KLOC worth of PHP/XSLT that
I want to get rid of. :o)

Oh, and your website dumps a 404 error when I click any link, be it
archives, categories or menu.

--
Thomas Løcke

Email: tl at ada-dk.org
Web: http://ada-dk.org
http://identi.ca/thomaslocke

Adrian Hoe
Mar 4, 2011, 8:34:45 AM
On Mar 1, 4:29 pm, Thomas Løcke <t...@ada-dk.org> wrote:
> On 2011-03-01 09:10, Adrian Hoe wrote:
>
> > I have been developing database applications using Ada and as well as
> > web applications with AWS for some time now. I would say (with my
> > experience), Ada is a good choice. I can't imagine it if I develop
> > those monster program in anything else all by myself.
>
> I'm in the process of moving away from Apache/PHP/XSLT to AWS. So far
> things are progressing very nicely; at least I haven't encountered any
> real show-stoppers yet.
>
> Do you have any advice to share? I have ~500KLOC worth of PHP/XSLT that
> I want to get rid of.  :o)


Go to the drawing board and redesign from scratch. It is worth the time
and effort. Ada/AWS is more readable and maintainable than PHP.


> Oh, and your website dumps a 404 error when I click any link, be it
> archives, categories or menu.

That must be a permission error. My website was hacked some time ago.
Will look into it.

> --
> Thomas Løcke
