Performance question


Gary

Aug 3, 2009, 4:15:15 PM8/3/09
to OpenQM-OpenSource
I have a "medium-size" file with 65000 records in it. Each record has
7 fields and the average record size is about 36 characters. One of
the keys is called "TERM.KEY" and it is 5 characters long. When I did
"MAKE.INDEX DOCREC TERM.KEY", it took about 5 seconds to create the
alternate index. However, if I do a "SELECT DOCREC BY TERM.KEY", it
takes almost a full minute to return! Why should it take so long to
create a select list this way? Is there a better method I should use
to get my records delivered in "TERM.KEY" sequence?

Thanks,

GaryW

Ashley Chapman

Aug 4, 2009, 5:40:39 AM8/4/09
to openqm-o...@googlegroups.com


2009/8/3 Gary <gwal...@gmail.com>

I was wondering when you would hit something like this. :-)

It seems that all databases have performance issues at some time.  With Oracle, you can use EXPLAIN PLAN to see what's going on behind the scenes.  Alas, QM is not that sophisticated.

Some things that might help:-

1. Increase the amount of memory available for sorting.  The SORTMEM parameter defaults to 1 MB.  The SORTMRG, SORTWORK and RECCACHE parameters can also make a difference.

2. Try different CASE/NO.CASE settings.

3. Use REQUIRE.INDEX to ascertain whether the index is actually being used.

4. Consider making your own select list rather than relying upon SELECT.


HTH

Ashley


Brian Speirs

Aug 4, 2009, 3:33:33 PM8/4/09
to OpenQM-OpenSource
Let's make sure that you have done the basics first ....

Did you BUILD the index after you created it?

BUILD.INDEX filename ALL CONCURRENT

Cheers,

Brian

eppick77

Aug 4, 2009, 8:00:54 PM8/4/09
to OpenQM-OpenSource
One other thing: use the NO.NULLS option when creating the index.

CREATE.INDEX FILENAME NAME NO.NULLS

Eugene

Diccon

Aug 5, 2009, 7:24:38 AM8/5/09
to OpenQM-OpenSource
After going back over this particular issue, I personally found it was
the SORTMEM setting in the qmconfig file that caused the majority of the lag.
Kinda makes me feel better that so many others are familiar with it
though. (At the time Martin made me think I was the only one, and it
took weeks to find out it was SORTMEM.)

All of the above are valid though.

We need to change the defaults; they're driven towards the blasted PDA
version. Any standard PC can afford much more than 1 MB of shuffling
space, and when it overflows it seems to cause massive degradation.

-Diccon

eppick77

Aug 5, 2009, 2:08:59 PM8/5/09
to OpenQM-OpenSource
Diccon,

Here is my config. I do have 8 gigs of ram on this machine.

CMDSTACK 999
DEADLOCK 0
DEBUG 12
DHCACHE 10
DUMPDIR /ssd/dump
ERRLOG 100 kb
EXCLREM 0
FILERULE 7
FLTDIFF 0.00000000002910
FSYNC 0
GDI 0
GRPSIZE 1
INTPREC 11
JNLMODE 0
JNLDIR
LPTRHIGH 66
LPTRWIDE 80
MAXCALL 10000
MAXIDLEN 70
MUSTLOCK 0
NETFILES 3
NUMFILES 1000
NUMLOCKS 500
OBJECTS 0 [No limit]
OBJMEM 0 [No limit]
PDUMP 0
QMCLIENT 0
RECCACHE 0
RINGWAIT 1
SAFEDIR 0
SH
SH1
SORTMEM 32768 kb
SORTMRG 4
SORTWORK /tmp
SPOOLER lpr
TEMPDIR /tmp
TERMINFO
YEARBASE 1930

Eugene

Diccon

Aug 7, 2009, 8:11:54 AM8/7/09
to OpenQM-OpenSource
Thanks Eugene.
Yeah, we have a deployed server that runs with 8 GB of RAM; it's a very
nice size for fast, accessible memory :) Weird to have RAM far larger
than my first hard drive, but I think others have more extreme
comparisons than me :)

Combine that with fast disks and it works a charm.

I'll change the defaults for SORTMEM on the GPL release. Probably
ramp it up to 3 MB at least; since our target is servers, not PDAs, it
could save some grief. Ultimately we need to find the reason the disk
cache causes such a massive performance hit with low SORTMEM. It
sounds like thrashing to me, but I'm not sure. I've no idea if it's been
resolved in the commercial version or not. Perhaps I should download and
smash-test it later.

-Diccon

Martin Phillips

Aug 7, 2009, 9:01:57 AM8/7/09
to openqm-o...@googlegroups.com
Hi Diccon,

The values of SORTMEM and SORTMRG are very dependent on the relative speeds
of the processor and disk and also on available memory. In Eugene's case, he
is using Managed Flash Technology disks which are astonishingly fast.

I have just tried a range of values here when sorting a file with just under
2 million records. It looks like a SORTMEM value of 4096 gives me best
performance but this would probably vary depending on the data I am sorting
and what else is happening in the system at the time.

The sort system builds a simple binary tree. When this reaches the memory
usage set by SORTMEM, the tree is written to disk and we start again.
SORTMRG determines how many disk trees we merge in each pass. A value of 4
seems to work well.
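The scheme Martin describes (build an in-memory tree until SORTMEM is exhausted, write it out as a sorted run, then merge SORTMRG runs per pass) is the classic external merge sort. A minimal Python sketch of the idea follows; it is a generic illustration, not QM's actual code, and for simplicity `sortmem` here counts records rather than bytes:

```python
import heapq
from itertools import islice

def external_sort(records, sortmem=4, sortmrg=2):
    """External-sort sketch: build runs of at most `sortmem` records
    (QM builds a binary tree until SORTMEM bytes are used), then merge
    at most `sortmrg` runs per pass until a single sorted run remains."""
    it = iter(records)
    runs = []
    while True:
        chunk = list(islice(it, sortmem))  # one "in-memory tree" worth of records
        if not chunk:
            break
        runs.append(sorted(chunk))         # dump the run to "disk", already sorted
    if not runs:
        return []
    # Merge passes: combine sortmrg runs at a time, as SORTMRG does.
    while len(runs) > 1:
        runs = [list(heapq.merge(*runs[i:i + sortmrg]))
                for i in range(0, len(runs), sortmrg)]
    return runs[0]
```

A bigger `sortmem` means fewer runs and fewer merge passes, at the cost of a larger in-memory structure, which is why the best value depends on the data and the machine.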


Martin Phillips
Ladybridge Systems Ltd
17b Coldstream Lane, Hardingstone, Northampton, NN4 6DB
+44-(0)1604-709200

Gary

Aug 8, 2009, 10:11:36 PM8/8/09
to OpenQM-OpenSource
Thanks for all the suggestions. I am getting "erratic" behavior from
"SELECT ... BY". The same 63000 records sometimes sort in 5 seconds
and sometimes take nearly a minute. Also, I am getting strange results
with the "REQUIRE.INDEX" clause. Here is an example:

:SELECT DOCREC BY TERM.KEY REQUIRE.INDEX
Processing terminated: This query cannot be resolved with an index

but look at the following:

:LIST.INDEX DOCREC ALL
Alternate key indices for file DOCREC
Number of indices = 2

Index name...... En Type Nulls S/M Fmt Field/Expression
TERM.KEY        Y  I    Yes   S   L   FMT(TERMINAL,'R%4'):STATUS:FMT(DOCUMENT.NO,'R%8')
KEY             Y  I    Yes   S   L   FMT(CO.NO,"R0%2"):FMT(TYPE,"1L"):FMT(DOCUMENT.NO,"R0%8")

To me it looks like it should be using the index... Can anyone tell
me what I'm missing?

GaryW

Ashley Chapman

Aug 9, 2009, 1:09:36 AM8/9/09
to openqm-o...@googlegroups.com


2009/8/9 Gary <gwal...@gmail.com>


Looks like a dictionary/index mismatch, or it might be a bug in QM.

What's the dict item for DOCREC?



--
Ashley Chapman

Gary

Aug 9, 2009, 8:43:16 AM8/9/09
to OpenQM-OpenSource
Ashley,

Here is the DICT for DOCREC:

LIST DICT DOCREC                                                        Page 1

@ID.........  TYPE  LOC...............................  CONV..  NAME........  FORMAT  S/M  ASSOC...
@ID           D     0                                           DOCREC        11L     S
CO.NO         D     1                                           CO NO         2' 'R0  S
TYPE          D     2                                           TYPE          1' 'L   S
DOCUMENT.NO   D     3                                           DOCUMENT NO   8' 'R0  S
TERMINAL      D     4                                           TERMINAL      4' 'R0  S
STATUS        D     5                                           STATUS        1' 'L   S
TERM.KEY      I     FMT(TERMINAL,'R%4'):STATUS:                 TERMINAL KEY  13L     S
                    FMT(DOCUMENT.NO,'R%8')
KEY           I     FMT(CO.NO,"R0%2"):FMT(TYPE,"1L"):                         11' 'L  S
                    FMT(DOCUMENT.NO,"R0%8")

8 record(s) listed

I'm at a loss. Let me know what you think.

GaryW

Ashley Chapman

Aug 9, 2009, 8:47:59 AM8/9/09
to openqm-o...@googlegroups.com
It's been a bit mangled by email, but it looks like the format might be causing a problem.

Make sure there are no leading or trailing spaces in any of the dictionary fields, and that the FORMAT is as simple as you can make it.  Perhaps try a 10L or 10R format for all the fields.

Yep, it's pretty inadequate to enter dictionary items using the editor.  There really should be a validating entry form for it.

I think there are a few bugs in this part of QM, which have been fixed in later versions of the commercial version.

--
Ashley Chapman

Gary

Aug 9, 2009, 9:20:00 AM8/9/09
to OpenQM-OpenSource
Ashley,

Thanks for the input. I'm afraid I can't get much simpler on the
formats for a couple of reasons:

1) I need to stay compatible with my existing ISAM files. The keys
are designed so that the sort order and item order will remain the
same as in the ISAM file when reading by key.

2) I use the dictionary now for screen and report layout. I certainly
can't have all my columns 10 characters wide.

I CAN change this temporarily to see if that is causing the error.
Nonetheless, sort times are all over the place.

SELECT DOCREC BY TERMINAL = 74 seconds

SELECT DOCREC BY DOCUMENT.NO = 3 seconds

SELECT DOCREC BY TERM.KEY = 3 seconds

SELECT DOCREC BY KEY = 3 seconds

It seems that there is a theme here. With DOCUMENT.NO and KEY, almost
all records will have unique keys. TERM and TERM.KEY have a large
number of identical keys. This seems to KILL sort time. I don't know
why. Perhaps a bug?

GaryW

Gary

Aug 9, 2009, 9:31:28 AM8/9/09
to OpenQM-OpenSource
Ashley,

A couple of comments on my previous post (DAMN GOOGLE, why can't I
edit these?)... I meant to say "TERMINAL" not "TERM", and I should
have pointed out that the formatting is probably NOT the problem, as
the keys that sort quickly are formatted just like the keys that sort
poorly. It REALLY seems that the determining factor is the number of
identical keys.

GaryW

Gary

Aug 9, 2009, 9:36:15 AM8/9/09
to OpenQM-OpenSource
To all:

OK, this is definitely approaching "bug" status. I can "fix" the
sort problem by adding something arbitrary to the sort key. For
example,

"SELECT DOCREC BY TERMINAL" takes 74 seconds, BUT

"SELECT DOCREC BY TERMINAL BY DOCUMENT.NO" sorts in THREE!

Of course, there is no reason that I can't ALWAYS do the second as the
results are perfectly acceptable, but I SHOULDN'T HAVE TO.

GaryW

Ashley Chapman

Aug 9, 2009, 9:44:55 AM8/9/09
to openqm-o...@googlegroups.com


2009/8/9 Gary <gwal...@gmail.com>


To all:

   OK, this is definitely approaching "bug" status.  I can "fix" the
sort problem by adding something arbitrary to the sort key.  For
example,


It's not very good, but this is the sort of problem that I expect when there is a low level of uniqueness in an indexed field.  You get the same sort of thing with Oracle too.

Oddly enough, your query might actually run much faster without the index.  This is one of the reasons I try to write applications that do not directly manipulate the indexes.  That way, indexes can be added and removed by the DBA without breaking the app.
 






--
Ashley Chapman

Gary

Aug 9, 2009, 9:56:01 AM8/9/09
to OpenQM-OpenSource
Ashley,

I believe that this is NOT an index problem. Look again at my
last post. TERMINAL is NOT an index! (Neither is DOCUMENT.NO.) So
it seems the sort times have nothing to do with alternate keys, but
rather with the uniqueness of the sort "keys" themselves. It seems
that this must be a deficiency in the sort routine itself. Is it
possible that what is happening here is related to HUGELY unbalanced B-
trees? I haven't looked at the code, but perhaps you are more
familiar.

GaryW



Gary

Aug 9, 2009, 10:32:50 AM8/9/09
to OpenQM-OpenSource
Ashley,

I'm guessing here that the b-tree insertion code looks something
like:

if keya<keyb then insert_left else insert_right

What's probably happening is that if all the keys are equal the sort
is behaving like a (bad) insertion sort rather than a tree sort.

Perhaps something like:

if keya<keyb then insert_left
else if keya>keyb then insert_right
else insert_random_left_or_right.

would solve the problem.
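Gary's guess is easy to demonstrate with a toy model. The sketch below (Python, purely illustrative; it is not QM's actual insertion code) counts comparisons for a naive binary-tree insert: with all-equal keys sent to a fixed side, the tree degenerates into a chain and costs O(n^2) comparisons, while a random tie-break keeps it roughly balanced:

```python
import random

def bst_insert_comparisons(keys, tie_break=False):
    """Insert keys into a naive binary tree (node = [key, left, right]),
    returning the number of comparisons made. With tie_break=True,
    equal keys go left or right at random instead of always right."""
    root = None
    comparisons = 0
    for k in keys:
        if root is None:
            root = [k, None, None]
            continue
        node = root
        while True:
            comparisons += 1
            if k < node[0]:
                side = 1                      # descend left
            elif k > node[0]:
                side = 2                      # descend right
            else:                             # equal keys: the interesting case
                side = 1 if (tie_break and random.random() < 0.5) else 2
            if node[side] is None:
                node[side] = [k, None, None]  # attach new node here
                break
            node = node[side]
    return comparisons

skewed = bst_insert_comparisons([7] * 200)          # chain: 199*200/2 comparisons
balanced = bst_insert_comparisons([7] * 200, True)  # random tie-break: far fewer
```

With 200 identical keys the fixed-side version does 19,900 comparisons, exactly the behaviour of a bad insertion sort; the randomized version stays close to n log n.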

GaryW

Diccon

Aug 9, 2009, 1:46:19 PM8/9/09
to OpenQM-OpenSource
Wow, Gary, you're getting ahead of me :) Not bad for the guy with the
problem.
* Yes, it seems to potentially be an issue with the BTree sort, not
actually the index. But I haven't/can't yet prove that it's not a knock-
on effect caused by something else.
* The formatting in the dict shouldn't bother the sorting or use of
the index (in theory). Neither should the complexity of the I-type dict
items you have; as long as they work at all, any indexes on them should
work. To be sure everything is fine, make sure you have run
COMPILE.DICT on your file and perhaps BUILD.INDEX again to make sure
the index is ok. But you seem to have found it's not about indexes
anyway.
A BTree sort is intrinsically going to suck if trying to sort
something with low cardinality (lots of the same things), but it still
should not be causing the massively pants results you and I are
getting. I also still get this on a dev system, by the way, so I can
replicate your issue; you're not alone.

You have prompted me to remember something. I remember after some
drastic changing of query statements I had some incorrect junk on the
end of my query, which the command line accepted but which seemed to
cause massive slowdown. It points the opposite way round from what you've
found; however, it does point at the place that is causing the problem.
One program reads in your query for all kinds of statements (all your
sorts, selects, etc.): that is QPROC. It decides what params to set for
the big query executer in the sky. I know there were some issues in the
past with the way it parses all the complex properties available.
But that was back in QM 2.4; I had thought they were resolved by
2.6-6, which you should be running.

Martin, if you did fix some parsing issues with QPROC, can you point me
in the direction of the issue and I can apply a fix. (I know you won't
respond until money.)

I'm analysing the QPROC and BTree sorting algorithms at the moment,
trying to find this very bug for myself (and the project) anyway. I'll
keep you posted, Gary. Let me know as you get test cases that do or
don't work; each one points me closer to where the cause is.

-Diccon

Diccon

Aug 9, 2009, 1:51:36 PM8/9/09
to OpenQM-OpenSource
Also, yes, I would count this as a bug. It may be a mildly inconsistent
bug, initiated by certain conditions, but it's a bug.
The day our modern dev server was being beaten by an old crummy P2
600MHz with 32 MB of RAM, I knew something was wrong. But our issue was
solved by a SORTMEM increase on a hunking server.
We MUST find it and resolve it; if it comes back again later I will kick
myself.

-Diccon

Ashley Chapman

Aug 9, 2009, 5:49:37 PM8/9/09
to openqm-o...@googlegroups.com


2009/8/9 Gary <gwal...@gmail.com>

Looks like you're narrowing down the issue.

I'm not sure at this point if the problem is caused by slowness accessing the file, or by slowness in the memory processing.  It can be hard to separate the two, as the file will probably be completely cached in memory anyway.

It's probably worth taking a quick look at the ANALYSE.FILE command, and also the technote on the Ladybridge site about how the linear file hashing works:
http://www.openqm.com/cgi/lbscgi.exe?T0=h&X=t10xx980xp&t1=technote
Although it does not look like a file I/O issue.

Diccon, the dictionary can break indexing in some circumstances.  Martin did a fix to the commercial version I think.  I can't recall the details, but I think it was involving spaces or some such.


Ashley Chapman

Diccon

Aug 10, 2009, 10:09:26 AM8/10/09
to OpenQM-OpenSource
On 9 Aug, 22:49, Ashley Chapman <ash.chap...@gmail.com> wrote:
> 2009/8/9 Gary <gwalb...@gmail.com>
>
> Looks like you're narrowing down the issue.
>
> I'm not sure at this point if the problem is caused by slowness accessing
> the file, or with slowness in the memory processing.  Can be hard to
> separate the two, as the file will probably be completely cached in memory
> anyway.
>
There is one way to find out. You mentioned we have no EXPLAIN PLAN as
Oracle does; there's not much reason why we shouldn't. To start with, we
can essentially do a pretty output of the boolean decisions QPROC makes.
My goal would be to insert optional timing diagnostics, so you turn them
on and get visual feedback on which stage it is at and how long
each one is taking. It would help us in the future in tweaking performance
or integrating multiple sorting algorithms.

> It's probably worth taking a quick look at ANALYSE.FILE command, and also
> the techtip on the ladybridge site about how the linear file hashing works.
> http://www.openqm.com/cgi/lbscgi.exe?T0=h&X=t10xx980xp&t1=technote
> Although it does not look like a file I/O issue.
>
Bizarre thing for me: I just pulled a problem file from one of our
dev systems and put it on my personal box, and the problem doesn't
repeat. Which points squarely at resource issues as opposed to the data
causing an issue. I can see why Martin always thought we were talking
rubbish about this, but it IS an issue. Despite low resources it
shouldn't be this bad. I'm just having trouble identifying a repeatable
cause case.

> Diccon, the dictionary can break indexing in some circumstances.  Martin did
> a fix to the commercial version I think.  I can't recall the details, but I
> think it was involving spaces or some such.
>
Ok noted. I will try and trawl the OpenQM mailing list and find it :P

Diccon

Aug 10, 2009, 10:10:03 AM8/10/09
to OpenQM-OpenSource
Martin:
I understand the process of dumping the BTree to file; sounds fine.
I'm not sure I follow why you're keeping the number of cached files down
by re-merging them when you get above X number. What was your thinking
on this?
Also worth mentioning: thanks for speaking up here; your input saves us
hours in reverse engineering. We might help you find an underlying/un-
found bug :)

Martin Phillips

Aug 10, 2009, 10:37:33 AM8/10/09
to openqm-o...@googlegroups.com
Hi Diccon,

> I understand the process of dumping the BTree to file; sounds fine.
> I'm not sure I follow why you're keeping the number of cached files
> down by re-merging them when you get above X number. What was
> your thinking on this?

Ultimately, the select list has to end up as a sorted list of ids. A big
select may produce many intermediate files, each up to SORTMEM in size. It
would be possible for READNEXT to look at all of these to work out which is
next every time that it fetches an id but the performance is far better if
the merge is done a few files at a time. This technique has been a common
part of sorting for as long as I've been in the industry (too long to
mention!).
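One standard way to make the "which intermediate file is next" decision cheap is a heap-based k-way merge (a generic sketch of the technique, not QM's implementation): keep the next candidate id from each run in a small heap, so each READNEXT-style fetch costs O(log k) rather than a scan of all k files:

```python
import heapq

def readnext_stream(runs):
    """Yield ids in sorted order from several sorted runs, merging with
    a heap so each fetch pops one element and pushes its successor."""
    # Seed the heap with the first element of each non-empty run.
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    while heap:
        value, i, j = heapq.heappop(heap)   # smallest pending id
        yield value
        if j + 1 < len(runs[i]):            # advance within that run
            heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
```

For example, `list(readnext_stream([[1, 4], [2, 3], [5]]))` yields `[1, 2, 3, 4, 5]`. Pre-merging runs a few at a time, as SORTMRG does, keeps k small so this per-fetch cost stays trivial.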

I have not been following the other issues in this thread but I have noticed
a few odd points along the way....

There is already a very basic diagnostic in the query processor to show how
it will handle a query. This is activated using the DEBUGGING keyword but
also requires use of OPTION $QUERY.DEBUG first. This diagnostic is intended
for trapping errors in the query parser but does help to show how a query
will be processed.

A sorted select where the ids are already in order will result in a very
unbalanced sort tree. This is one good reason not to take SORTMEM up too
high.

A query with multiple selects will process them essentially left to right
though there is some optimisation. A query that finds an active select list
when it starts uses that as the basis for its processing and will then
examine each record in that list to see if it meets any new selection
criteria.

There are several known bugs in the query processor in the GPL source, all
fixed in the commercial product. Such are the penalties of open source.

Steve Bush

Aug 10, 2009, 10:59:56 AM8/10/09
to openqm-o...@googlegroups.com
Hi Martin,

Just out of curiosity, how do you justify calling your product OPENQM
now that it is closed source? Are you planning to go back to calling it just
QM?

If I were to decide to use OPENQM, during an IT audit people would be asking
the same thing, and at the moment I can't think of a decent answer.

B Regards, Steve

Martin Phillips

Aug 10, 2009, 11:11:53 AM8/10/09
to openqm-o...@googlegroups.com
Hi Steve,

Let's not start this argument again.

There is an open source version. You and I agreed that it would be treated
as a sandbox for developers who want to play with the possibility of
contributing things back to the mainstream source. We also agreed that it
would not be updated unless there was a massive shift in how it is being
treated by the open source community.

Steve Bush

Aug 10, 2009, 11:23:05 AM8/10/09
to openqm-o...@googlegroups.com
It is not an argument, Martin. It is a simple question.

What should I say to an IT department/auditor who questions why the product
I use is called OPENQM and yet the source code FOR THE PRODUCT BEING USED is
not available?

I will lose all credibility if I attempt to reply with your current response
which is "There is an open source version available BUT it is USELESS
because it is out of date and DOESN'T EVEN RUN on the platform in question".

I hope my point is clear.

As for the intent of the open source version I am entirely in agreement with
you.

B Regards, Steve.

Martin Phillips

Aug 10, 2009, 11:36:20 AM8/10/09
to openqm-o...@googlegroups.com
Hi Steve,

> What should I say to an IT department/auditor who questions why
> the product I use is called OPENQM and yet the source code FOR
> THE PRODUCT BEING USED is not available?

Why should they care what the product is called? What matters is that they
get a robust commercial grade product with outstanding support. If they
really want to know why it has "open" in the name, explain about the sandbox
version.

I have yet to come across a user who has raised the concern that you cite.

Ashley Chapman

Aug 10, 2009, 11:44:57 AM8/10/09
to openqm-o...@googlegroups.com
> What should I say to an IT department/auditor who questions why the product
> I use is called OPENQM and yet the source code FOR THE PRODUCT BEING USED is
> not available?

I've been using OpenInsight for years, and have experienced two software audits.  Neither time did the auditors ask the above question, so I think it's unlikely to be a problem.

However, I have been asked by a banking client if I can provide indemnity from the database provider OR full source code.  Ladybridge don't provide the source code any more (the commercial and GPL versions have diverged too far), and cannot provide the required level of reassurance that IBM and Oracle do.

This is extremely frustrating for me, and prevents me from using QM in a commercial environment.


Ashley Chapman

Martin Phillips

Aug 10, 2009, 11:53:43 AM8/10/09
to openqm-o...@googlegroups.com
Hi Ashley,
 
> However, I have been asked by a banking client if I can provide
> indemnity from the database provider OR full source code.
 
We do have clients for whom we enter into an escrow agreement that would give them full access to the source if we ceased trading for any reason.

Ashley Chapman

Aug 10, 2009, 11:59:12 AM8/10/09
to openqm-o...@googlegroups.com


2009/8/10 Martin Phillips <martinp...@ladybridge.com>

Hi Ashley,
 
> However, I have been asked by a banking client if I can provide
> indemnity from the database provider OR full source code.
 
We do have clients for whom we enter into an escrow agreement that would give them full access to the source if we ceased trading for any reason.

With respect, Martin: escrow is useless.

Escrow relies upon the deposited source code being regularly updated, and also being verified as complete.  This is a very expensive option, and most software houses manage to get away with making a declaration that the code will be lodged with a legal firm or similar.

Nowadays, only the most gullible client will fall for that!

 
 

--
Ashley Chapman

Diccon

Aug 10, 2009, 12:21:26 PM8/10/09
to OpenQM-OpenSource
Gary, are you on the dev mailing list?
It may be useful for you and us if you could follow the planned
development.
You seem to be getting your hands dirty enough on your own to warrant
joining the rest of the devs. At the very least, hearing what we're up
to :)

Go to:
http://gpl.openqm.com/mailman/listinfo/openqm-dev
to register if you're interested.

-Diccon

Martin Phillips

Aug 10, 2009, 12:30:13 PM8/10/09
to openqm-o...@googlegroups.com
Hi Ashley,
 
> With respect, Martin; Escrow is useless.
>
> Escrow relies upon the deposited source code being regularly
> updated, and also being verified as complete.  This is a very
> expensive option, and most software houses manage to get away
> with making a declaration that the code will be lodged with a
> legal firm or similar. 

Under one of our escrow schemes, the source for every release is deposited in encrypted form on the user's own system. Only the encryption key (which doesn't change) is deposited with a third party, so the costs are very low. We guarantee that the source code genuinely is that from which we built the release but, as you would expect, the user cannot himself verify completeness unless the escrow trigger condition is met. We do make a subset of the source available to verify that he can understand the build process.
 
Escrow agreements are negotiated individually with the user concerned to ensure that they meet their requirements.
 
> Nowadays, only the most gullible client will fall for that!
 
Remind me to pass on your opinion to the clients.

Steve Bush

Aug 10, 2009, 12:47:42 PM8/10/09
to openqm-o...@googlegroups.com
OK thanks Martin I get that.

However it isn't just what the product is called. Your website slams home
the benefits of OPENQM having an open source version for Linux but there is
no mention that the open source version is well out of date and, even worse,
is currently frozen.

Our best clients are regional members of global advertising agency networks
run out of New York City, London and Paris. These guys are smarter than we
are. They are going to take one look at your web site, ask a few questions
and then say we are dopes for trying to pull the wool over their eyes.

This is our position, I do understand your other clients may not be in the
same position.

B Regards, Steve


Ashley Chapman

Aug 10, 2009, 1:03:21 PM8/10/09
to openqm-o...@googlegroups.com


2009/8/10 Steve Bush <steve...@neosys.com>



This is exactly what happened when I put forward a proposal to a client of mine.  Whilst they were considering QM, they monitored the Ladybridge websites, and were appalled that the opensource commitments (e.g. 30 day GPL release lag) had been quietly forgotten.

These guys are not stupid.  They can see that Ladybridge is a one-man band, and that software gets released with minimal levels of QA.  The only way I can sell QM to them is if they can have a source-code licence.  They have this with numerous other providers, and are familiar with the non-disclosure requirements.

Over the last few years, I've spent a long time trying to get this point across.  This is the last time I will raise the subject.

Ashley Chapman