Converting a small app from xHarbour to Harbour ?


Mel_the_Snowbird

May 4, 2011, 7:36:18 PM5/4/11
to Harbour Users
Hi:

I'm going to try to convert a small app from xHarbour to Harbour.

I'll show the .bc file below.

I have read over the 4-page description of hbmk2 and (somewhat)
understand it.

But, I'd like a short bit of guidance on how to use it, given the
following xyz.bc file for xHarbour (which I use as an argument to
xHarbour's hbmake.exe).

I have a recent build of Harbour (under MinGW452) installed and
working on my machine.

btw, I can compile and run the .prgs in the tests sub-dir with no
problem.

Thanks for some guidance.

-Mel

********* xyz.bc *******
#BCC
VERSION=BCB.01
!ifndef CC_DIR
CC_DIR = $(MAKE_DIR)
!endif

!ifndef HB_DIR
HB_DIR = $(HARBOUR_DIR)
!endif

RECURSE= NO
COMPRESS = NO
EXTERNALLIB = NO
XFWH = NO
FILESTOADD = 4
WARNINGLEVEL = 0
USERDEFINE =
USERINCLUDE =
GUI = NO
MT = NO

PROJECT = \xyz\xyzinit.exe $(PR)
OBJFILES = \xyz\obj\xyz.obj //
\xyz\obj\XYZPROC.obj //
\xyz\obj\CHGFLDS.obj //
\xyz\obj\DBFSETUP.obj //
\xyz\obj\FILUTILS.obj //
\xyz\obj\RBLDIDX.obj //
\xyz\obj\SETUPFIL.obj //
\xyz\obj\xyzutils.obj //
\xyz\obj\setupclr.obj //
\xyz\obj\email.obj //
\xyz\obj\errorsys.obj //
$(OB)

PRGFILES = \xyz\source\xyz.prg //
\xyz\source\XYZPROC.PRG //
\xyz\source\CHGFLDS.PRG //
\xyz\source\DBFSETUP.PRG //
\xyz\source\FILUTILS.PRG //
\xyz\source\RBLDIDX.PRG //
\xyz\source\SETUPFIL.PRG //
\xyz\source\xyzutils.prg //
\xyz\source\setupclr.prg //
\xyz\source\email.prg //
\xyz\source\errorsys.prg //
$(PS)

OBJCFILES = $(OBC)
CFILES = $(CF)
RESFILES =
RESDEPEN =

LIBFILES = \xharbour\lib\rtl.lib //
\xharbour\lib\lang.lib //
\xharbour\lib\vm.lib //
\xharbour\lib\rdd.lib //
\xharbour\lib\macro.lib //
\xharbour\lib\pp.lib //
\xharbour\lib\dbfntx.lib //
\xharbour\lib\dbfcdx.lib //
\xharbour\lib\dbffpt.lib //
\xharbour\lib\common.lib //
\xharbour\lib\gtwin.lib //
\xharbour\lib\codepage.lib //
\xharbour\lib\ct.lib //
\xharbour\lib\tip.lib //
\xharbour\lib\pcrepos.lib //
\xharbour\lib\hsx.lib //
\xharbour\lib\hbsix.lib //
\xharbour\lib\libnf.lib //
\xharbour\lib\libmisc.lib //
\xharbour\lib\debug.lib //
\xharbour\lib\pcrepos.lib //
\xharbour\lib\what32.lib

EXTLIBFILES =
DEFFILE =

# removed the -p flag below HARBOURFLAGS = -I\xyz\source -a -v -p
HARBOURFLAGS = -I\xyz\source -n -m -a
CFLAG1 = -OS $(CFLAGS) -d -L$(HB_DIR)\lib;$(FWH)\lib -c -I\xyz\source
CFLAG2 = -I$(HB_DIR)\include;$(CC_DIR)\include
RFLAGS =
LFLAGS = -L$(CC_DIR)\lib\obj;$(CC_DIR)\lib;$(HB_DIR)\lib -Gn -M -m -s -Tpe -ap
IFLAGS =
LINKER = ilink32

ALLOBJ = c0x32.obj $(OBJFILES) $(OBJCFILES)
ALLRES = $(RESDEPEN)
ALLLIB = $(LIBFILES) import32.lib cw32.lib
.autodepend

#COMMANDS
.cpp.obj:
$(CC_DIR)\BIN\bcc32 $(CFLAG1) $(CFLAG2) -o$* $**

.c.obj:
$(CC_DIR)\BIN\bcc32 -I$(HB_DIR)\include $(CFLAG1) $(CFLAG2) -o$* $**

.prg.obj:
$(HB_DIR)\bin\harbour -n -go -I$(HB_DIR)\include $(HARBOURFLAGS) -o$* $**

.rc.res:
$(CC_DIR)\BIN\brcc32 $(RFLAGS) $<

#BUILD

$(PROJECT): $(CFILES) $(OBJFILES) $(RESDEPEN) $(DEFFILE)
$(CC_DIR)\BIN\$(LINKER) @&&!
$(LFLAGS) +
$(ALLOBJ), +
$(PROJECT),, +
$(ALLLIB), +
$(DEFFILE), +
$(ALLRES)

***** end of xyz.bc *****

Pritpal Bedi

May 4, 2011, 9:57:16 PM5/4/11
to Harbour Users
Hi

>   I'll show the .bc file below.

>    btw, I can compile and run the .prgs in the tests sub-dir with no
> problem.


xyz.hbp
=====

# You must be using this gt because of what32
-gtwvt
-w0
-st

-oxyzinit

\xyz\source\xyz.prg
\xyz\source\XYZPROC.PRG
\xyz\source\CHGFLDS.PRG
\xyz\source\DBFSETUP.PRG
\xyz\source\FILUTILS.PRG
\xyz\source\RBLDIDX.PRG
\xyz\source\SETUPFIL.PRG
\xyz\source\xyzutils.prg
\xyz\source\setupclr.prg
\xyz\source\email.prg
\xyz\source\errorsys.prg


-L\xharbour\lib
-lwhat32



Pritpal Bedi

Mel_the_Snowbird

May 4, 2011, 10:56:30 PM5/4/11
to Harbour Users
Hi Pritpal:

Thank you for converting my xyz.bc file into a .hbp file.

I'm into the compiling phase and found three compiling errors
between xHarbour and Harbour:

1. The REGEX command 'LIKE' had to be changed -- did that, works
now

2. The 'SET ERRORLOG TO "c:\myerrorlog.txt"' command is
(somehow) wrong ??

3. The AT Command (3 args) in xHarbour: nPOS :=
at("DEF","ABCDEFGHIXYZ",nOLDPOS) is wrong ??

So, I can't go further into these experiments until I cure the compile
errors. Of course, these errors do not occur in xHarbour.

Thanks,
-Mel

Viktor Szakáts

May 5, 2011, 10:55:45 AM5/5/11
to Harbour Users
Hi Mel,

> I'm into the compiling phase and found three compiling errors
> between xHarbour and Harbour:
>
> 1. The REGEX command 'LIKE' had to be changed -- did that, works
> now
>
> 2. The command 'SET ERRORLOG TO "c:\myerrorlog.txt" 'command is
> (somehow) wrong ??

Replace with:
xhb_errorlog( "c:\myerrorlog.txt" )

Requires Harbour r16717

> 3. The AT Command (3 args) in xHarbour: nPOS :=
> at("DEF","ABCDEFGHIXYZ",nOLDPOS) is wrong ??

For the rest, add '#include "xhb.ch"' to your sources
and 'xhb.hbc' to your hbmk2 cmdline.
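Put together, the two fixes above could look like this in a converted source file (a sketch, not tested against Mel's app; it assumes xhb.hbc is linked and a Harbour build recent enough to have xhb_ErrorLog() — hb_At() is the core function that takes a start position like xHarbour's 3-argument At()):

```harbour
#include "xhb.ch"   // xHarbour compatibility translations

PROCEDURE Main()
   LOCAL nOldPos := 3
   LOCAL nPos

   // xHarbour: SET ERRORLOG TO "c:\myerrorlog.txt"
   xhb_ErrorLog( "c:\myerrorlog.txt" )

   // xHarbour: nPos := At( "DEF", "ABCDEFGHIXYZ", nOldPos )
   // Harbour core offers hb_At() with a start position:
   nPos := hb_At( "DEF", "ABCDEFGHIXYZ", nOldPos )
   ? nPos   // absolute position of "DEF" in the string

   RETURN
```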

Viktor

Mel_the_Snowbird

May 5, 2011, 11:58:39 AM5/5/11
to Harbour Users
Hi Viktor:

Thank you. I'll try those fixes out today.

-Mel

Mel_the_Snowbird

May 5, 2011, 11:58:01 AM5/5/11
to Harbour Users
Hi Viktor:



On May 5, 8:55 am, Viktor Szakáts <harbour...@syenar.hu> wrote:

Mel_the_Snowbird

May 6, 2011, 11:16:03 AM5/6/11
to Harbour Users
Hi:
I've been working on conversion of my small app for the last day.
I've been cutting it down, and attempting to make it a clean small
executable:

But, I must be doing something wildly wrong, because my Harbour
version is almost exactly *twice* as big as my xHarbour version:

xHarbour version: 1,089,536 bytes; Harbour version: 2,174,048
bytes !

I have included my build batch file below along with the .HBP
script I used, for your perusal.

btw, the log file shows a correct build. I used the executable in a
production mode for a few minutes, and it seemed to perform properly.
Then I reverted to the original xHarbour executable while I puzzle
over this size problem.

another btw: Where *are* the object modules, so I can examine their
size? It appears that the sub-dir where they are placed is .hbmk\
etcetera. Very confusing and hidden.



Thanks for your comments to help me trim this executable down to a
reasonable size.

-Mel

=============== the two files ========
--- start of build batch file : bldcgi.bat ---
@echo off
REM Have to Ensure that MinGW Compiler is on Path
SET _PATH=%PATH%
SET PATH=\MINGW452\BIN;%PATH%;
\HARBOUR\BIN\HBMK2 XHB.HBC CGI.HBP > LOG.TXT 2>&1
SET PATH=%_PATH%
...
...
---- end of build batch file : bldcgi.bat ---

=================================================

--- start of cgi.hbp -----

#cgi.hbp
#=====
#
# Mods by Snowbird on May 6/11
-warn=def
-std
-lxhb
-rebuildall
# end of Mods by Snowbird


-ocgiinit

\cgi\source\hbcgi.prg
\cgi\source\hbCGIPROC.PRG
\cgi\source\hbFILUTILS.PRG
\cgi\source\hbSETUPFIL.PRG
\cgi\source\hbcgiutils.prg
\cgi\source\hbsetupclr.prg
\cgi\source\xhberr.prg

--------- end of cgi.hbp ----


Viktor Szakáts

May 6, 2011, 12:07:09 PM5/6/11
to Harbour Users
Hi Mel,

Mel_the_Snowbird wrote:
> Hi:
> I've been working on conversion of my small app for the last day.
> I've been cutting it down, and attempting to make it a clean small
> executable:
>
> But, I must be doing sopmething wildly wrong because: My harbour
> version is almost exactly *twice* as big as my xHarbour version:
>
> xHarbour version: 1,089,536 bytes Harbour Version: 2,174,048
> bytes !

This has been answered just recently, you may
want to look up the thread here.

Shortly:
- use -strip hbmk2 option.
- mingw (and msvc, and all modern C compilers)
create larger executables than bcc, and this is
one way these executables can be much faster.
See more by looking up: 'executable size vs speed'
You can tweak it by overriding C/Harbour compiler options
when building Harbour, though probably it will be difficult to
find a better balance than the default.
- use -compr=max hbmk2 option to reduce disk usage.
for this you will need upx compressor present in PATH.
- on the longer term you may want to start using native Harbour core
functionality to avoid the slight "ballast" that xhb lib may cause.

> another btw: Where *are* the object modules so I can examine their
> size. It appears that the sub-dir where they are place is .hbmk/
> etcetera. Very confusing and hidden.

I don't agree with confusing. They _have to be_ somewhere, right?
(somewhere which won't collide with other projects and other targets
of this project)

You can override default working dir using
-workdir= hbmk2 option, though in general I don't recommend it.

> SET _PATH=%PATH%
> SET PATH=\MINGW452\BIN;%PATH%;
> \HARBOUR\BIN\HBMK2 XHB.HBC CGI.HBP > LOG.TXT 2>&1
> SET PATH=%_PATH%

Perfect.

> #cgi.hbp
> #=====
> #
> # Mods by Snowbird on May 6/11
> -warn=def

Only needed if you use .c input files in
your project.

> -std

Not necessary, it's the default. For all core GTs
and most 3rd party libs, this will be set automatically.

> -lxhb

Fine, but xhb.hbc might be better in general.

> -rebuildall

Putting this in .hbp file is not very useful.

If you always want to rebuild, simply don't use
-inc (incremental mode), which is also the default.
BTW, if you don't use -inc mode, '.hbmk' dir won't
be created. (as a consequence you won't be able
to inspect objects at all)

If you want to use -inc mode (which is BTW quite
nice), you'll very rarely need the -rebuild option, and
in those cases you can add it directly to the hbmk2 cmdline.

> \cgi\source\hbcgi.prg
> \cgi\source\hbCGIPROC.PRG
> \cgi\source\hbFILUTILS.PRG
> \cgi\source\hbSETUPFIL.PRG
> \cgi\source\hbcgiutils.prg
> \cgi\source\hbsetupclr.prg
> \cgi\source\xhberr.prg

I wouldn't recommend starting filenames with a
backslash; they ruin automatic rebasing of input files
by hbmk2. Use a hard-coded leading pathsep only if
you do want to indicate the absolute root dir.
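So, assuming the .hbp file itself lives in \cgi (an assumption about Mel's layout), the list could be written relative to the .hbp and hbmk2 will rebase the paths itself — a sketch of the same project:

```
# cgi.hbp placed in \cgi -- source paths relative to the .hbp
-ocgiinit

source\hbcgi.prg
source\hbCGIPROC.PRG
source\hbFILUTILS.PRG
source\hbSETUPFIL.PRG
source\hbcgiutils.prg
source\hbsetupclr.prg
source\xhberr.prg
```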

Viktor

Mel_the_Snowbird

May 6, 2011, 3:11:55 PM5/6/11
to Harbour Users


On May 6, 10:07 am, Viktor Szakáts <harbour...@syenar.hu> wrote:
> Hi Mel,
> >       xHarbour version: 1,089,536 bytes    Harbour Version: 2,174,048
> > bytes   !
>
> This has been answered just recently, you may
> want to look up the thread here.

I've just been using this thread for about a week. I could not find
the post relating to executable size (I used the search box at the top
with no result except your post above).

>
> Shortly:
> - use -strip hbmk2 option.

I tried this. It brought the size from 2,174,048 down to 1,884,160.

> - mingw (and msvc, and all modern C compilers)
>   create larger executables than bcc, and this is
>   one way these executables can be much faster.
>   See more by looking up: 'executable size vs speed'

My concern is that my executable is really a 'cgi' executable
(called up by Apache). So, if 50 people arrive at once, then I'm
using much more memory than with the BCC version.

Also, Przemek said I should *not* use compression (via UPX)
because the loading/unravelling of the executable would take a lot
longer. So, I leave my current BCC executable uncompressed.


>   You can tweak it by overriding C/Harbour compiler options
>   when building Harbour, though probably it will be difficult to
>   find a better balance than the default.
> - use -compr=max hbmk2 option to reduce disk usage.
>   for this you will need upx compressor present in PATH.

answered above.

Mel_the_Snowbird

May 6, 2011, 3:23:48 PM5/6/11
to Harbour Users
I hit the <tab> and <enter> again and screwed up the post :((( (This
is SO different from posting on the comp.lang.xharbour group.)

> > I don't agree with confusing. They _have to be_ somewhere, right?

Yes, if I could only find them: my log file *stops* after the
final harbour compile to .c. It shows *no* C compile, and NO link.
But it *does* complete with the big executable, and the executable runs
properly.

I found that my C:\TMP\ sub-dir has some hbmk stuff buried very deep
in it.

I used -rebuildall with a sub-dir called c:\cgi\hbobj\ and 'lo and
behold' the .o and .c files showed up there.


> > (somewhere which won't collide with other projects and other targets
> > of this project)
>
> > You can override default working dir using
> > -workdir= hbmk2 option, though in general I don't recommend it.

Posted above, but why don't you recommend it ??

>> > If you always want to rebuild, simply don't use
> > -inc (incremental mode), which is also the default.

How do you *not* use -inc? There is no -noinc option in the
documentation.

> > BTW, if you don't use -inc mode, '.hbmk' dir won't
> > be created. (as a consequence you won't be able
> > to inspect objects at all)

Well. I couldn't find them anyway.

>
> > If you want to use -inc mode (which is BTW quite
> > nice), you'll very rarely need -rebuild option, and
> > these cases you can add it directly to hbmk2 cmdline.
>
> > > \cgi\source\hbcgi.prg
> > > \cgi\source\hbCGIPROC.PRG
> > > \cgi\source\hbFILUTILS.PRG
> > > \cgi\source\hbSETUPFIL.PRG
> > > \cgi\source\hbcgiutils.prg
> > > \cgi\source\hbsetupclr.prg
> > > \cgi\source\xhberr.prg
>
I removed the \cgi\source\ from the above files. It worked
properly.

Thank you.

Please excuse my using the <Tab><enter> and posting when I wasn't
ready. (How can I stop this ?!?)


-Mel

Viktor Szakáts

May 6, 2011, 3:38:22 PM5/6/11
to Harbour Users
Hi Mel,

Mel_the_Snowbird wrote:
> On May 6, 10:07 am, Viktor Szakáts <harbour...@syenar.hu> wrote:
>
> My concern is that my executable is really a 'cgi' executable
> (called up by Apache). So, if 50 people arrive at once, then I'm
> using much more memory than the BCC version.

Use the -shared hbmk2 option so your app gets linked against the
harbour dynamic lib. In your scenario this will be the
most efficient, and much more efficient than any BCC build.
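In the batch file Mel posted earlier, this would just mean adding the option to the hbmk2 line (a sketch; same file names as in that post):

```
\HARBOUR\BIN\HBMK2 -shared XHB.HBC CGI.HBP > LOG.TXT 2>&1
```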

> Also, Prezemek said I should *not* use compression (via UPX)
> because the loading/unravelling of the executable would be a lot
> longer. So, I leave my current BCC executable uncompressed.

It's an option, I didn't know your context.

Viktor

Viktor Szakáts

May 6, 2011, 3:43:25 PM5/6/11
to Harbour Users

Mel_the_Snowbird wrote:
> I hit the <tab> and <enter> again and screwed up the post :((( (This
> is SO different from posting on thhe comp.lanf.xharbour group.
>
> > > I don't agree with confusing. They _have to be_ somewhere, right?
>
> Yes, if I could only find them: My log file *stops* after the
> final harbour compile to .c. It shows *no* C compile, and NO Link.
> But It *does* complete with the big executable and the executable runs
> properly.

It works as expected. The details are largely irrelevant once you
start using these tools in production.

If you need details, turn on extra info with -info, and turn
on -trace to see all the details.

> > > You can override default working dir using
> > > -workdir= hbmk2 option, though in general I don't recommend it.
>
> Posted above, but why don't you recommend it ??

It's easy to screw it up unless you're certain what
you're doing. The default will take care of not colliding when
building various different target types. It's a detail that's
not worth messing with, though if you want, you can.

> >> > If you always want to rebuild, simply don't use
> > > -inc (incremental mode), which is also the default.
>
> How do you *not* use -inc, thyere is no -noinc option in the
> documentation

It's the default. You always turn on incremental explicitly
with -inc option.

> Please excuse my using the <Tab><enter> and posting when I wasn't
> ready. (How can I stop this ?!?)

I don't know, sorry.

Viktor

Chee Chong Hwa

May 7, 2011, 12:17:23 AM5/7/11
to harbou...@googlegroups.com
Hi Mel


On Sat, May 7, 2011 at 3:11 AM, Mel_the_Snowbird <meds...@aol.com> wrote:


On May 6, 10:07 am, Viktor Szakáts <harbour...@syenar.hu> wrote:
> Hi Mel,
> >       xHarbour version: 1,089,536 bytes    Harbour Version: 2,174,048
> > bytes   !
>
> This has been answered just recently, you may
> want to look up the thread here.

  I've just been using this thread for about a week. Could not find
the post relating executable to size (used the search box at the top
with no result except your post above)

CCH : Try http://cch4clipper.blogspot.com/2011/03/does-smaller-exe-size-equates-to-better.html


Cheers


CCH
http://cch4clipper.blogspot.com

Klas Engwall

May 7, 2011, 10:04:39 AM5/7/11
to harbou...@googlegroups.com
Hi Mel,

Ignoring all the questions Viktor has already answered ...

> I hit the <tab> and <enter> again and screwed up the post :((( (This
> is SO different from posting on thhe comp.lanf.xharbour group.

[...]

> Please excuse my using the <Tab><enter> and posting when I wasn't
> ready. (How can I stop this ?!?)

You are posting online at Google Groups, right? You can configure your
Google Groups account to forward all messages to your email account (I
set up a separate account for this). Then you can use your familiar
email client to read and write mailing list messages locally. In
Thunderbird I set up a separate profile in a separate directory to keep
the mailing list from mixing with all the other emails. It works
perfectly for me.

Regards,
Klas

Mel_the_Snowbird

May 8, 2011, 1:32:41 AM5/8/11
to Harbour Users
CCH said:


> CCH : Try http://cch4clipper.blogspot.com/2011/03/does-smaller-exe-size-equates...

Hi:
I'm still concerned about the *load* time of my CGI executable
where there may be (for example) 50 clients arriving at the same time,
and thus 50 loads being carried on at the same time. I believe that
the loading time for a double-sized executable must somehow detract
not only from memory usage, but also from the total time taken to
complete the request.

In the matter of CGI executables, there is usually not much
looping or computing. It's usually just examining passwords, building
a page, and sending it out the door to Apache to get back to the browser.

A few days ago, I 'timed' my xHarbour exe against my nearly
identical Harbour executable. The result was they *both* took 0.1563
seconds (i.e., approx 1/6th of a second) to service a client browser
request from the start of execution until control was passed back to
Apache (databases were opened and seek'ed, etc). I don't know how long
the 'load' time was.

Anyway, it's an interesting process to think about.

Thank you.

-Mel

Chee Chong Hwa

May 8, 2011, 9:58:38 AM5/8/11
to harbou...@googlegroups.com
Hi Mel

Based on your scenario, I would think a smaller EXE should be more practical, so using bcc as the compiler would be the logical choice over mingw.
If you are already using HMG Extended, the default compiler is bcc.

Cheers

CCH
http://cch4clipper.blogspot.com

On Sun, May 8, 2011 at 1:32 PM, Mel_the_Snowbird <meds...@aol.com> wrote:
CCH said:


> CCH : Try http://cch4clipper.blogspot.com/2011/03/does-smaller-exe-size-equates...


Viktor Szakáts

May 8, 2011, 12:30:52 PM5/8/11
to Harbour Users
> Based on your scenario, I would think a smaller EXE should be more
> practical. So using bcc as the compiler would be the logical choice over
> mingw.
> If you are already using HMG Exxtended, the default compiler is bcc.

BCC is absolutely not needed here.

First, the load time is negligible (confirmed by Mel's measurements)
since the .exe will be loaded from cache memory; second, if load time
and memory consumption are of any concern, the -shared option is the
answer, as I wrote previously. It will cause all Harbour core
code to be loaded only once, and only the actual CGI code to be
loaded for each session.

Viktor

Mel_the_Snowbird

May 8, 2011, 4:18:07 PM5/8/11
to Harbour Users
Viktor said:

> BCC is absolutely not needed here.
>
> First, the load time is negligible (confirmed by Mel's measurements)
> since the .exe will be loaded from cache memory, second, if load time
> and memory consumption is of any concern, -shared option is the
> answer, as I wrote previously. This will cause that all Harbour core
> code will be loaded only once, and only the actual CGI code will be
> loaded for each session.

Hi Viktor:

Yes, I read your post and tried the -shared option, and found that
the executable had dropped dramatically to 201,216 bytes (from the
'stripped' version of 1,888,nnn bytes). A *big* drop.

My build process (and batch file) normally sends the resulting
executable to my Apache server machine (on my local intranet) and
places it in the ...\cgi-bin\xxx\ sub-dir, where the executable is
immediately active and waiting for the next page request from client
browsers.

However, in the '-shared' case, I don't know what/where/how to get
the 'remainder of the harbour core code' to the server and where to
put it.

Rather than exposing my stupidity, would you mind telling me *what*
and *where* the 'Harbour Core Code' is, and where I should put it so
that my small-size cgi executable will be able to use it.

Also, if I *change* the core code, how do I 'unload' the core code
and then install (??) the new core code?

btw, the CPU measurements I made (through the secondsCPU() function)
on both machines seemed to alternate between 0.00 and 0.1563
seconds on *both* versions. I used the default argument to the
function. I tried to get an average of the times, but I steadily got
0.00 or 0.1563 seconds. Perhaps I can simulate some more extensive
CPU work (with a loop or whatever) to test this before I exit my cgi
app and return to Apache.

Thank you.

-Mel

Viktor Szakáts

May 8, 2011, 11:42:29 PM5/8/11
to Harbour Users
Hi Mel,

> Rather than exposing my stupidity, would you mind telling me *what*
> and *where* the 'Harbour Core Code' is, and where I should put it so
> that my small-size cgi executable will be able to use it.

The .dll is called harbour-21.dll for current 32-bit Windows
builds. It's one file only and you need to copy it next to the
.exe(s) (or anywhere in server PATH).
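A deploy step in the build batch file might then look like this (a sketch; it assumes the .dll sits in \HARBOUR\BIN as in the binary distributions, and the cgi-bin target path below is illustrative — substitute your own):

```
REM after a -shared build, ship the Harbour runtime .dll with the .exe
COPY \HARBOUR\BIN\harbour-21.dll \APACHE\cgi-bin\xxx\
```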

> Also, if I *change* the core code, how to I 'unload' the core code
> and then install (??) the new core code ?

The .dll is unloaded if no loaded .exe refers to it. So the
same applies to it as to the .exe.

[ BTW, if you use unmodified Harbour source (which I
highly recommend) you only need to update the .dll if
there was any incompatible change in nightly Harbour
code or you want to use some freshly added feature.
For stable Harbour releases you only need to update the
.dll between major and minor version updates, f.e. from
1.0 to 2.0 or 2.0 to 2.1, but not from 2.1.0 to 2.1.1. ]

> on both machines seemed to alternate between 0.00 seconds to 0.1563
> seconds on *both* versions. I used the default argument to the
> function. I tried to get an average of the times, but I steadily got
> 0.00 or 0.1563 seconds ? Perhaps I can simulate some more extensive
> cpu work (with a loop or whatever) to test this before I exit my cgi
> app and return to Apache

I'm not an expert in measuring CGI load time, though
it seems the load time will most probably be negligible in
any scenario here (there is no disk read involved and exe
size is pretty low in any case). Maybe it will be more
interesting to measure peak/average memory consumption
by these CGI processes in total.

Another big win when using .dll is when you have multiple
CGI .exes.

Viktor

Mel_the_Snowbird

May 9, 2011, 10:47:24 AM5/9/11
to Harbour Users
Viktor said:


> The .dll is called harbour-21.dll for current 32-bit Windows
> builds. It's one file only and you need to copy it next to the
> .exe(s) (or anywhere in server PATH).
>

I'll try that later today !

>
> The .dll is unloaded if no loaded .exe refers to it. So the
> same applies to it as to the .exe.
>

Great !


> [ BTW, if you use unmodified Harbour source (which I
> highly recommend) you only need to update the .dll if
> there was any incompatible change in nightly Harbour
> code or you want to use some freshly added feature.
> For stable Harbour releases you only need to update the
> .dll between major and minor version updates, f.e from
> 1.0 to 2.0 or 2.0 to 2.1, but not from 2.1.0 and 2.1.1. ]
>
>
> I'm not an expert in measuring CGI load time, though
> as it seems the load time will most probably be negligible in
> any scenario here (there is no disk read involved and exe
> size is pretty low in any case). Maybe it will be more
> interesting to measure peak/average memory consumption
> by these CGI processes in total.
>
> Another big win when using .dll is when you have multiple
> CGI .exes.

Yes, I run three web sites from my server -- all currently using
xHarbour. But, I've been testing the 'small' site with Harbour. That
is the one I'll start with !

Thanks for your guidance. I'll let you know when I've got my cgi app
ready to run under harbour with a .dll.

-Mel

Qatan

May 9, 2011, 11:22:43 AM5/9/11
to harbou...@googlegroups.com
Hello Mel,

> Yes, I run three web sites from my server -- all currently using
> xHarbour. But, I've been testing the 'small' site with Harbour. That
> is the one I'll start with !
>
> Thanks for your guidance. I'll let you know when I've got my cgi app
> ready to run under harbour with a .dll.

Seems interesting...
I would like to see a working example, if possible, of course.
Thank you.

Qatan


Mel_the_Snowbird

May 9, 2011, 2:39:13 PM5/9/11
to Harbour Users
Hi Viktor and Qatan:

I completed the cgi speed test on my system a few minutes ago.

I built my CGI app under xHarbour with BCC 5.5.1, and also with a
nearly current version of Harbour under MinGW 4.5.2.

In summary (and surprisingly), the Harbour times were *slower* than
the xHarbour times: Harbour approx 7 seconds, xHarbour approx 5 secs,
for a one-to-a-million loop building a *long* string of nearly 6
megabytes.


Here are those results

**************************

//
// nSTSECS was inited by hb_secondscpu() at top of program

// here is the stressful loop and the string-building test

// This next is line 569 in this CGI app
cTEST := ""                // a LOCAL vrbl
FOR I = 1 TO 1000000       // I is a LOCAL vrbl
   cTEST += NTOC( I )      // a function call to convert a number to a
                           // string, and catenate it to another string
NEXT
CGIOUT cFORM    // send the form to Apache to get it out the door

CLOSE DATABASES // I had two databases opened above here

// these next lines are basically speed computations

nNDSECS := HB_SECONDSCPU() // measure how long till now

memowrit( "\cgi\logs\showtime.txt", "Length of vrbl cTest= " + ;
   NTOC( len( cTEST ) ) + " (CPU Time: " + STR( nNDSECS - nSTSECS, 8, 5 ) )

// For Harbour and xHarbour, the length of the vrbl was 5,888,896 bytes
// For Harbour the CPU time was: 7.35938 seconds (only checked once)
// For xHarbour the CPU time was: 5.31250 (and also 5.32813) seconds

// The CPU: Pentium (R) 4 1.60 GHz in an older Dell Desktop

RETURN NIL // return to Apache 2.2.10

Viktor Szakáts

May 9, 2011, 3:03:45 PM5/9/11
to Harbour Users
Hi Mel,

It's pretty hard to advise anything. First you need to decide
what is more important, speed or size (memory consumption).

Lately, we've been (or at least I have been) talking about
memory consumption. Which is supposed to be eased
by -shared mode. At the same time -shared mode has
at least two kinds of overhead compared to -static:
1) it has to load the .dll (from cache, but anyway), it
has to bind it, etc
2) harbour .dll is by default built in MT, and MT mode
has some overhead compared to ST.
+1) 2) on some platforms (*nix) it has extra overhead because of PIC
(position independent code) mode

Which means, if your goal is sheer speed with your number of
users, probably -static is a better option ATM.

[ If you wish to optimize on both, you may try generating
ST harbour .dll, though for this I'll first commit a change
to revert this option to usable state. ]

Viktor

Viktor Szakáts

May 9, 2011, 3:48:52 PM5/9/11
to Harbour Users
> [ If you wish to optimize on both, you may try generating
> ST harbour .dll, though for this I'll first commit a change
> to revert this option to usable state. ]

With Harbour r16745, it's possible to force an ST harbour .dll
by using this setting and rebuilding Harbour:
__HB_BUILD_DYN_ST=yes

[ I didn't test it, and it's an undocumented setting, so use it
at your own risk. ]

Viktor

Przemysław Czerpak

May 9, 2011, 4:31:37 PM5/9/11
to harbou...@googlegroups.com
On Mon, 09 May 2011, Viktor Szakáts wrote:

Hi Mel and Viktor,

> It's pretty hard to advise anything. First you need to decide
> what is more important, speed or size (memory consumption).

In this particular case the test measures the speed of the NTOC() function,
which consumes over 95% of the CPU time. Both compilers use nearly the same PRG
implementation of this function, which is also rather inefficient.
It could be improved very strongly by simply rewriting it in C. Even small
improvements at the PRG level give ~20% time reduction.
The xHarbour version is a little bit shorter because it's not fully
CT3 compatible. The Harbour version has a few fixes which added additional
code and reduced the speed. The second reason for the speed reduction is the
overhead caused by the MT HVM, which is now the default in the Harbour shared
(harbour*.dll) library.
Both compilers optimize the += operator for strings, so this operation costs
nearly nothing, though please remember that Harbour optimizes all
<exp1> += <exp2>
operations, but xHarbour only expressions where <exp1> is a LOCAL variable or an
indexed array. Try tests/speedstr.prg to see it in real life.
This test can also show you the overhead produced in current Harbour versions by
MT mode and the -shared switch.
Mel, if you replace NTOC() in your test with STR() then you will eliminate the
overhead produced by this function. If you also link your code statically then
you will eliminate the overhead introduced by the MT HVM. This should give more
realistic results for comparison.
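A sketch of the loop with that substitution (LTrim() is added here so the appended text is an unpadded number, as NTOC() produced; timings are of course machine-dependent):

```harbour
// same benchmark, but without the PRG-level NTOC() overhead
cTEST := ""
FOR I := 1 TO 1000000
   // Str( I ) is a core VM function; LTrim() removes its leading padding
   cTEST += LTrim( Str( I ) )
NEXT
```

Harbour core also provides hb_ntos(), which returns the unpadded string directly.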

BTW, does anyone want to rewrite NTOC() and the corresponding functions in C?

> Lately, we've been (or at least I have been) talking about
> memory consumption. Which is supposed to be eased
> by -shared mode. At the same time -shared mode has
> at least two kinds of overhead compared to -static:
> 1) it has to load the .dll (from cache, but anyway), it
> has to bind it, etc
> 2) harbour .dll is by default built in MT, and MT mode
> has some overhead compared to ST.
> +1) 2) on some platforms (*nix) it has extra overhead because of PIC
> (position independent code) mode
>
> Which means, if your goal is sheer speed with your number of
> users, probably -static is a better option ATM.

Programs linked dynamically register in the HVM all functions present in the harbour
shared library. This registration also takes some time, so if the same application
is executed thousands of times then the summary overhead is noticeable. BTW, I'll improve
the startup registration code ASAP, but static binaries will still have fewer symbols to
activate, so they will be faster.

> [ If you wish to optimize on both, you may try generating
> ST harbour .dll, though for this I'll first commit a change
> to revert this option to usable state. ]

Viktor, I would like to ask you to restore support for different HVMs in shared libraries.
I plan to add support for serialized threads like in xBase++ or Python.
It will give item write protection, so it'll be safe for internal HVM structures to write
to the same complex item from different HVM threads. It's not a real MT mode, but it looks
like xBase++ and Python users can live with it, and for sure it greatly helps to port
existing xBase++ code which does not use any protection to Harbour. Also some libraries
which are not MT ready will work correctly with such an MT model in the HVM.
Anyhow it means that we will have yet another HVM library and potentially a harbour*.dll
for users who will want to use such an HVM in shared mode.

best regards,
Przemek

Viktor Szakáts

May 9, 2011, 4:46:31 PM5/9/11
to Harbour Users
Hi Przemek,

Przemysław Czerpak wrote:
> > [ If you wish to optimize on both, you may try generating
> > ST harbour .dll, though for this I'll first commit a change
> > to revert this option to usable state. ]
>
> Viktor, I would like to ask you to restore support for different HVMs in shared libraries.

I maintain my objection, for reasons detailed in my
original e-mail(s). Shortly: multiple .dlls are practically
unsupportable once you start building .dll versions of
other libs.

IMO -shared mode is not for speed.

Anyhow I've just made a commit which enables all
dynlib combinations with internal build switches, so
for anyone feeling like experimenting, it's possible.
(hbmk2 however only supports one flavor, and this
is intentional)

> I plan to add support for serialized threads like in xBase++ or PYTHON.
> It will give item write protection so it'll be save for internal HVM structures to write
> to the same complex item by different HVM threads. It's not real MT mode but looks that
> xBase++ and PHYTON users can leave with it and for sure it greatly helps to port existing
> xBase++ code which does not use any protection to Harbour. Also some libraries which are
> not MT ready will work correctly which such MT model in HVM.
> Anyhow it means that we will have yet another HVM library and potentially harbour*.dll
> for user who will want to use such HVM in shared mode.

:( I'm not happy, to say the least. I'm _very strongly_
(can't emphasize it enough) against introducing huge
build-time diversity in code behavior. It has many downsides
(build-time cost, impossible to test, manage, etc.).

Can it be made a dynamic option? Can it be made a contrib/3rd
party option?

[ Besides, I don't really understand what it gives to
Harbour programmers, but that's just a minor aside. ]

Viktor

Mel_the_Snowbird

May 9, 2011, 5:29:20 PM5/9/11
to Harbour Users
Viktor & Przemek:

I'm going to wait for a while before I do anything dramatic.

I have reverted my small web site (www.whosaway.com) back to the
original xHarbour / BCC5.5.1 version for awhile, and I'll watch / lurk
here every day or so, but I have to get back to my other tasks that I
have put off while doing this interesting Harbour installation and
testing.

Thanks for helping me understand a lot more about Harbour (and I'm
sorry for 'shit-disturbing'. My intentions are good ! )


-Mel
