On 25 Sep, 21:44, "Enrico Maria Giordano"
<e.m.giord...@emagsoftware.it> wrote:
> > This code does not use indexes.
> Yes, my fault. The real test I'm using *does use* indexes, but that was
> lost in the copy/paste operation. So I change the sample:
[...]
If you need serious results then you should create a self-contained
example which creates the tables and indexes and makes some skip
operations with more or less aggressive filters.
In the past I posted some such test code.
> > If you want to test network overhead then you can start with
> > HBNETIO
> My friend Maurizio Ghirardini already tested it and told me that he got
> very little speed improvement. Maurizio? Can you give us more details?
You should not expect a noticeable speed difference, because it is
physically exactly the same number of IO operations. If you wanted to
see something like that then you can stop your investigation here. It's
technically impossible, because over 95% of the time is consumed by IO
operations when you access files over a network. It doesn't matter what
language you are using. A speed improvement can only be reached by
reducing the number of IO operations: either by moving to other RDDs
which do not use network file IO operations, or by introducing some
type of read-ahead buffers, which look nice in tests but cannot be used
in a concurrent environment because they cause data corruption. Some
such read-ahead features are supported by network drivers, e.g. the
infamous opportunistic locks. It's a very simple mechanism:
The first user opens a file in pseudo-shared mode; in practice it uses
it as if in exclusive mode, reading data in very big pieces and
buffering updates. It works really fast, in practice at a speed
comparable to files opened in exclusive mode.
When a second client opens the same file, the server blocks this
operation for a while, during which it tells the first client to send
the server all data written to the file and buffered on the client
side, and then to discard all of its local caches, because concurrent
access is now enabled and, due to other potential writers, it is
illegal to use buffered read data which may be out of date. If all of
this is implemented well, the modified buffers are correctly saved to
the files on the server and the read-ahead buffers are disabled, so
file access is safe for concurrent use, but suddenly everything begins
to work muuuuch slower. And this is the expected behavior.
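The break sequence described above can be sketched as a toy state
machine. This is only an illustration of the idea, not the real SMB
protocol; all class and method names here are invented:

```python
# Toy model of an opportunistic-lock (oplock) break. Not real SMB:
# all names are invented for illustration.

class Client:
    def __init__(self, name):
        self.name = name
        self.caching = False      # read-ahead + write-behind enabled?
        self.dirty = []           # locally buffered (unflushed) writes

    def write(self, data):
        if self.caching:
            self.dirty.append(data)   # buffered on the client - fast
            return []
        return [data]                 # must go straight to the server

class Server:
    def __init__(self):
        self.file = []            # the real file body
        self.clients = []

    def open(self, client):
        if not self.clients:
            client.caching = True     # first opener gets the "oplock"
        else:
            # Oplock break: every caching client must flush its
            # buffered writes and stop caching before we continue.
            for c in self.clients:
                if c.caching:
                    self.file += c.dirty
                    c.dirty = []
                    c.caching = False
        self.clients.append(client)

    def write(self, client, data):
        self.file += client.write(data)

srv = Server()
a, b = Client("A"), Client("B")

srv.open(a)
srv.write(a, "rec1")              # buffered on A only - fast
assert srv.file == [] and a.dirty == ["rec1"]

srv.open(b)                       # triggers the oplock break
assert not a.caching and a.dirty == []
assert srv.file == ["rec1"]       # A's buffer was flushed first

srv.write(a, "rec2")              # now every write is a real IO
assert srv.file == ["rec1", "rec2"]
```

Note where the slowdown comes from: after the break, every write costs
a real network IO instead of a local buffer append.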
You should be very concerned if you cannot observe such a speed
reduction, because it means that the opportunistic locks were not
correctly disabled and so the files cannot be safely used concurrently
in RW mode by more than one client. This happens quite often with MS
network drivers, and it's the reason why we have to disable
opportunistic locks in MS-Windows to eliminate file corruption.
It's also known that in some cases, even after you have made the
registry modifications, some operations are still buffered on the
client side, improving the speed results but creating a race condition
which sooner or later causes data corruption when two clients begin to
update the same region using their own caches instead of the real file
body from the server. Due to the nature of internal updates this race
condition is most dangerous for indexes, and usually they are corrupted
first. Sometimes, with a small number of updates, such installations
can work without corruption for a few days or even weeks (they are
lucky men), but of course anyone who thinks seriously about his job
must not do something like that in a production environment.
I hope that now you have some basic knowledge about the network
driver's job and opportunistic locks. In fact there are also some more
complicated improvements: different levels of caches, special file
access modes and locks, etc., which can improve performance a little.
Anyhow, none of them can give you a noticeable speed improvement with
_SAFE_ concurrent RW file IO access.
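The 95% figure above already dictates the ceiling. Plugging it into
Amdahl's law (the percentages here are just the arithmetic, not a
measurement) shows why tuning anything other than the IO count is
pointless:

```python
# Amdahl's law: if a fraction f of total time is unavoidable IO,
# speeding up everything else by a factor s gives an overall
# speedup of 1 / (f + (1 - f) / s).

def overall_speedup(io_fraction, other_speedup):
    return 1.0 / (io_fraction + (1.0 - io_fraction) / other_speedup)

f = 0.95                         # 95% of the time spent in network IO

print(overall_speedup(f, 2))     # doubling the non-IO speed: ~1.026x
print(overall_speedup(f, 1e9))   # "infinitely" fast code:     ~1.053x

# The ceiling is 1/f: with 95% IO you can never beat ~1.05x
# without reducing the number of IO operations themselves.
assert abs(overall_speedup(f, 1e9) - 1 / f) < 1e-6
```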
I asked you to run the test with HBNETIO because it's a very basic
network driver working over TCP connections dedicated to file IO
operations, so its overhead is minimal. Using it you can also be sure
that none of the unsafe read-ahead tricks is still enabled in some MS
network driver layer. It means that HBNETIO results should show the
real network performance: anything noticeably slower has some
configuration problem and should be reconfigured, or other network
drivers should be used; likewise, anything noticeably faster, when more
than one client opens the same files concurrently in RW mode, is not
safe and sooner or later will cause data corruption.
So you can check how close to HBNETIO your installations are.
If you need something faster then you have to reduce the number of IO
operations. You can do that using a dedicated remote RDD which does not
operate on files but sends a complex request to the server in a single
IO operation, and the server executes it, giving the answer also in a
single IO (if possible, of course: if you want to read a record which
is larger than the data buffer in an Ethernet frame, then it will have
to be sent in a few frames, though for upper-level protocols like TCP
it will look like a single piece of data).
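The difference is easy to see by just counting round trips. A sketch
with made-up numbers (1 ms LAN round-trip latency, 10,000 records; only
the proportions matter):

```python
# Counting network round trips: per-record file IO vs. a single
# complex request handled server-side. Numbers are made up; the
# point is that latency * round_trips dominates small transfers.

RTT = 0.001          # 1 ms round-trip time on a typical LAN
N_RECORDS = 10_000

# Classic network file IO: every record read is at least one
# request/response pair (often more, with index page reads).
file_io_time = N_RECORDS * RTT

# Remote RDD: one request carries the whole filter expression,
# the server scans locally, one response carries the result
# (ignoring extra frames for large payloads, as noted above).
remote_rdd_time = 2 * RTT

print(f"per-record IO : {file_io_time:.3f} s")    # 10.000 s
print(f"remote RDD    : {remote_rdd_time:.3f} s") #  0.002 s
assert file_io_time / remote_rdd_time == 5000
```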
You can make some tests with ADS, MEDIATOR or LetoDB. They are classic
remote RDDs. SQLRDD is also a type of remote RDD, though unlike the
previous ones it does not use its own dedicated server but tries to
connect to the RDBMS directly. It's a slightly less efficient approach
which introduces some limitations and incompatibilities with standard
Clipper/[x]Harbour/xbase++ RDD behavior, caused by the RDBMS client
API.
As a next step you can move your whole application to the server. If
necessary, you can divide it into client and server parts.
> > But real speed improvement is moving whole application to
> > the server side.
> I agree. But unfortunately there are some serious problems in doing so.
> As an example, what if an application uses a component (say, the email
> client or the word processor)? It would run the remote one, not the
> local one. Any solutions? Or am I wrong?
RPC is the answer. You can use HBNETIO as an RPC server, not only a
file server. BTW, in the Harbour SVN I committed an OLE server example
which allows HBNETIO RPC to be used from any other language through an
OLE interface.
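The general shape of "one request = one remote function call" can be
sketched with any RPC stack; below is a minimal analogy using Python's
standard-library xmlrpc. It is not HBNETIO's API, and the port number
and function names are invented for the example:

```python
# Minimal RPC sketch using Python's stdlib xmlrpc - an analogy for
# an RPC-capable server like HBNETIO, not HBNETIO itself.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def start_server():
    srv = SimpleXMLRPCServer(("127.0.0.1", 8099), logRequests=False)
    # Functions registered here run on the SERVER; the client only
    # ships the call name and arguments in a single request.
    srv.register_function(lambda a, b: a + b, "add")
    srv.register_function(lambda s: s.upper(), "shout")
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

server = start_server()
rpc = ServerProxy("http://127.0.0.1:8099")

# Each call is one round trip; all the work happens server-side.
sum_result = rpc.add(2, 3)
shout_result = rpc.shout("harbour")
server.shutdown()

print(sum_result, shout_result)
```

The same pattern scales from "execute this filter and return the
matching records" up to full client/server application splits.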
For my own use I created the GTNET library, which makes it easy to
write your own terminal servers with embedded RPC, so my code runs on
both the server and the client side. I have full control of the GTNET
client application from the server, accessing local files, printers and
COM ports just like the ones on the server, without any problem. And of
course I use programs on the client side; e.g. when I'm presenting some
HTML reports, the client automatically activates the system web
browser. The same goes for product pictures, interactive e-mails, etc.
Of course locally opened programs are not tracked on the server side,
so if the user turns off his computer or attaches his current session
to another one, only the server application is switched to the new
connection.
best regards,
Przemek