
Size of Essbase Cubes


Jerry

Jun 3, 2002, 9:47:20 AM
Hi all,

I have a problem with the size of our Essbase cubes. I have designed a
cube with 12 dimensions, and its size exploded (~17 GB for the data of
a single time period).
I have tried to benchmark Essbase storage against other tools, with the
following results:
flat text file: 6 MB
MS Analysis Server cube: 1.5 MB
Essbase cube: 21 MB (!!!)

Can anybody explain to me why Essbase needs so much more disk space?
I have talked to Hyperion support and they told me that the cube
configuration is OK (block size, ...).

This issue is a key problem for our project, because our customer has
to pay its provider for disk space.

Can somebody tell me whether Hyperion Essbase 6.5 (with the HOLAP
option) would help to solve my problem?

Thank you very much for helping.
regards,
Jerry

Oliver Gampe

Jun 3, 2002, 11:08:00 AM
Jerry wrote...
(in <dff2e087.02060...@posting.google.com>)

> Can anybody explain to me why Essbase needs so much more disk space?

I don't know Essbase, but I would guess that Essbase does a lot more
pre-summarization than the other tools you've tried and thus leaves less
work for the clients that will be accessing the cube.

(I've just created some models with the DI builder, and there it's
possible to build models ranging from 300 kB to 48 MB out of the same
1000-line flat file. It just depends on the settings and on the question
"does it make sense" ;-)

--
I wish for something!
Oliver

Today's Teaser:
Smile, it makes the world wonder what you are up to.

Briggs S. Christie

Jun 3, 2002, 12:41:29 PM
Jerry,

If block size isn't an issue (dense/sparse settings are keeping the blocks
under 80-100K), the fact that you have 12 dimensions is going to mean the
creation of a huge number of sparse member blocks. Are many of these
dimensions really deep, with lots of levels?

It's not that strange to have a 6 MB file create an Essbase page file more
than three times its size (especially with so many dimensions). Consider that
a data point for, let's say, a single day can actually create four points of
data, just in the Time dimension:

1- The day
2- The month that the day is in
3- The quarter that the month is in
4- The year that the quarter is in
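
To put rough numbers on that fan-out, here is a back-of-the-envelope
Python sketch. The ancestor counts per dimension are made up, not
Jerry's actual outline; in the Time example above, a day has 3 stored
ancestors (month, quarter, year), hence 1 + 3 = 4 points of data.

    # How one input cell fans out when every ancestor combination is
    # stored. Ancestor counts per dimension are hypothetical.
    ancestors_per_dim = [3, 3, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1]  # 12 dims

    cells = 1
    for a in ancestors_per_dim:
        cells *= 1 + a  # the input member itself plus each ancestor

    print(cells)  # 55296 potentially stored cells per input cell

Every extra dimension multiplies that product, which is why 12 of them
hurts so much.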

You have a few options to try to get started...

1. In my experience, there's never been a reason for 12 dimensions. The
retrieval of data is really difficult (at least one member of each dimension
must be represented), the presentation of the data is confusing to the
users, and 100% of the time, dimensions can be combined to reduce the total
number. I'd be interested to know how this thing got so big!

2. Overall size can be reduced using Dynamic Calc (non-store), especially in
the dense dimensions.

I'd be happy to chat with you about your database. Maybe I can help...

Briggs

--
------------------------------------------
Briggs S. Christie
Senior Consultant
Regional Manager, Hawaii
Analysis Team, Inc.
808-228-4392
Visit us at www.AnalysisTeam.com
"Jerry" <Jerry_...@yahoo.com> wrote in message
news:dff2e087.02060...@posting.google.com...

no-...@sonic.net

Jun 3, 2002, 2:36:05 PM
None of the 12 dims are candidates for being an attribute dimension?

Jerry <Jerry_...@yahoo.com> wrote:
: Hi all,


Jerry

Jun 4, 2002, 5:46:37 AM
I have checked this option already, but there are no candidates for
attribute dimensions. I have also contacted Hyperion support and
they told me that I have too many dimensions per cube. However, I
have built the same cube in MS Analysis Server and it was much
smaller.

no-...@sonic.net wrote in message news:<pGOK8.7326$3w2....@typhoon.sonic.net>...

Nigel Pendse

Jun 6, 2002, 6:28:37 AM

"Jerry" <Jerry_...@yahoo.com> wrote in message
news:dff2e087.02060...@posting.google.com...
> I have checked this option already, but there are no candidates for
> attribute dimensions. I have also contacted Hyperion support and
> they told me that I have too many dimensions per cube. However, I
> have built the same cube in MS Analysis Server and it was much
> smaller.

That's almost always the case. Essbase still suffers from database explosion
to a much greater extent than any other OLAP server, however well you design
the cubes, whereas Analysis Services is among the better ones, often
actually imploding the database. Along with the smaller database, it also
calculates a lot faster, though queries are sometimes (but not always)
slower than in Essbase.

See http://www.olapreport.com/DatabaseExplosion.htm


paul

Jun 10, 2002, 12:20:01 PM
Database explosion is a theoretical problem for all MOLAP tools. The
number of pre-calculated cells increases exponentially with the
number of dimensions. Essbase has apparently taken the low road by
only *barely* minimizing this effect with some basic compression options,
so you'll get a smaller data size with most other MOLAP products. (In
Hyperion's defense, storage is generally not the bottleneck it is in
your case.) There may still be a lot you can try to reduce it in
Essbase, of which 6.5 with HOLAP is a definite candidate.

The only compression options you have with Essbase MOLAP are bitmap
and RLE. RLE will save you a lot if your blocks have a lot of empty
space (low density), but it's still a shame that's the best you can
get without doing a lot of optimizing. That said, what are your
dimensions? A list of your dimensions with their stats --
dense/sparse, number of declared/stored members -- might help the
conversation. (ESSCMD> GETDBSTATS;)
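
To see why RLE wins on low-density blocks, here's a toy sketch in
Python (a caricature of the idea only, nothing like Essbase's actual
on-disk format):

    # Toy run-length encoding of a mostly-empty data block. MISSING
    # stands in for Essbase's #Missing marker.
    MISSING = None

    def rle_encode(cells):
        """Collapse the block into (value, run_length) pairs."""
        runs = []
        for v in cells:
            if runs and runs[-1][0] == v:
                runs[-1][1] += 1
            else:
                runs.append([v, 1])
        return runs

    block = [MISSING] * 950 + [12.5] * 50   # a block that's 5% dense
    print(len(rle_encode(block)))           # 2 runs instead of 1000 cells

The long run of missing cells collapses to a single entry, which is
exactly the low-density case described above.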

Attributes would make some serious impact, so it's too bad that one's
out for you. HOLAP and Integration Server will definitely get you
some page file savings, but this is a brand-new feature of the
product, so it's doubtful that anyone out there has a good case to
relate yet. I'd give it a shot if you have it. What you're doing
there is a 'drill-through', where you can store the detail for one or
more dimensions relationally instead of pre-calculating the MOLAP
data. This type of drill-through can be done in 6.2 as well, but it
requires a little more setup within Integration Server.

Other than that, you may have to get creative with your design in
order to take the 12 dimensions down to something more reasonable in
your page file(s). For example, try partitioning on one of your
dimensions if that's possible. Breaking an application up into more
than one cube can improve density: splitting your cubes into smaller
pieces allows you to drop 'irrelevant' subtrees when viewed within the
context of individual members of a dimension -- for example, measures
that apply to budget but not to actual, where you partition on the
scenario (budget/actual).

The goal is to get a tighter density within each block. Anything over
10 dimensions will likely have large empty-space regions. Let's say
you have a months dimension with 12 level-0 members and your model
only holds the current year. If this dimension is dense, then by now
(Jan-June) you're losing nearly 50% of your database size to empty space.
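
The arithmetic behind that 50% (a quick Python check using the numbers
from the example above; the compounding figure is my own extrapolation):

    # Half of a dense 12-month Time dimension is loaded, so half of
    # every block is empty space; empty fractions compound across
    # dense dimensions (two half-full dense dims -> 75% empty).
    density = 6 / 12
    print(f"{1 - density:.0%} empty")            # 50% empty
    print(f"{1 - density * density:.0%} empty")  # 75% empty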

Dynamic Calc storage on most of your dense upper-level members will
help too, if you haven't tried that yet. 12 dimensions is often
overkill on an OLAP cube, though you may well be doing something that
requires this. You'll never put together a single report that uses
all of them below the top level of detail, so outright elimination of
at least one or two dimensions might be in order if you're sticking
with Essbase.

Generally, your users will tell you they want dimensions just in
case... it's nice to have 12 fields of detail 'just because', but I'd
bet there are a few dimensions in your model that users will rarely if
ever drill down to. These dimensions would also be first in line as
candidates for a relational dimension with the HOLAP option, if you
can't lose them altogether or break this up into multiple models.
This type of design should work for you with AS as well, though. For
some interesting reading on the topics of dimension choices, etc.,
try 'OLAP Solutions: Building Multidimensional Information Systems'
by Erik Thomsen.

Best of luck!


"Nigel Pendse" <nig...@compuserve.com> wrote in message news:<102335946...@dyke.uk.clara.net>...

Nigel Pendse

Jun 10, 2002, 3:49:36 PM
Database explosion is *not* a MOLAP issue -- it's to do with too much
precalculation, which can happen with any kind of OLAP (including ROLAP,
MOLAP and HOLAP). In fact, PowerPlay, iTM1 and Analysis Services (in MOLAP
mode) all typically exhibit database implosion, not explosion. Express used
to suffer from it, but is reasonably good these days (though not as good as
the three listed above).

So, among surviving OLAP servers, database explosion is specifically an
Essbase problem. As I said in my previous post, see
http://www.olapreport.com/DatabaseExplosion.htm to understand this
better. Note that it has little to do with data compression, which most
MOLAPs do (Essbase being not much worse than the others in this regard).

Nigel Pendse
OLAP Solutions
http://www.olapreport.com


"paul" <phopp...@yahoo.com> wrote in message
news:dd5862f8.02061...@posting.google.com...

zyro

Jun 10, 2002, 5:37:02 PM
"Nigel Pendse" <nig...@compuserve.com> wrote in message news:<102335946...@dyke.uk.clara.net>...

what two words have been more controversial in the olap wars than
'database explosion'? i can't think of any. the simple retort is 'disk
is cheap', which it is and gets cheaper every day. my experience is
certainly skewed, but i have never heard tell of any project cancelled
because of the cost of storage. nor have i heard any dba shy away from
bragging about how large his database is.

take it into consideration, yes. but i look at disk space
differentials between essbase and other systems like mileage
differences between cars. in the end, it's the car, not the cost of
gas.

no one can deny that at a pure technical level, smaller is better. i
have been able to consistently reduce the size of an essbase database
by a factor of ten using pkzip. so one day, if hyperion chooses to,
they could license that software and knock that problem out. who knows
when they will decide to do so? probably never because even though
it's a notorious 'problem' that has been remarked upon for years it
obviously hasn't been escalated up the chain of enhancements to the
product. my interpretation? nobody in the customer base needs it that
badly.

besides, even windows nt allows you to run compressed drives. how easy
is that?

Nigel Pendse

Jun 10, 2002, 5:58:35 PM

"zyro" <six...@yahoo.com> wrote in message
news:5db607db.02061...@posting.google.com...

It's not the amount of disk that usually limits apps, but just how long it
takes Essbase to precalculate. Also, high speed RAID disk isn't all that
cheap, and if Essbase is going to take tens or even hundreds of times as
much as another product, it actually does matter.

Here's an Essbase vs TM1 example I recently came across: <<The Essbase
system takes 20 minutes to consolidate and developing reports is time
consuming and difficult. ... They asked us in a second time to show them TM1
on their consolidation data. I think most of them did not believe it was
real. Consolidation time < 1 second. Cube size 15MB in TM1, 3GB in Essbase
(The index file alone was 26MB!).>>

OK, maybe the user doesn't care about consuming 3GB disk in Essbase vs
0.015GB in TM1 (a ratio of 200:1), but they most certainly do care about <1
second vs 20 minutes consolidation time (a ratio of over 1200:1).
Incidentally, this was a Pillar site, but they now use TM1.


paul

Jun 11, 2002, 6:31:21 PM
"Nigel Pendse" <nig...@compuserve.com> wrote in message news:<102373866...@doris.uk.clara.net>...

> Database explosion is *not* a MOLAP issue -- it's to do with too much precalculation,
> which can happen with any kind of OLAP (including ROLAP, MOLAP and HOLAP).
> In fact, PowerPlay, iTM1 and Analysis Services (in MOLAP mode) all typically
> exhibit database implosion, not explosion. Express used to suffer from it,
> but is reasonably good these days (though not as good as the three listed
> above).
>

OK, I'll agree with you that data*base* explosion may primarily be an
Essbase problem. What I was getting at is that the *data* explosion
resulting from the number of permutations of dimension values is a
fact of life for any OLAP system -- as you point out, ROLAP, MOLAP, and
HOLAP alike. But this still comes down to how you accommodate all these
possibilities: how to handle sparsity, pre-calculation, and
compression to disk.
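
The permutation arithmetic is stark. A rough Python illustration, with
invented member counts and an invented input-cell count rather than any
real model:

    # Potential cell count is the product of the dimension sizes, so
    # the populated fraction collapses as dimensions are added.
    dim_sizes = [1000, 500, 100, 50, 20, 12, 10, 10, 5, 5, 3, 2]  # 12 dims
    potential = 1
    for n in dim_sizes:
        potential *= n

    input_cells = 5_000_000  # hypothetical loaded data points
    print(f"potential cells: {potential:.3g}")                   # ~9e+15
    print(f"populated fraction: {input_cells / potential:.3g}")  # ~5.56e-10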

> So, among surviving OLAP servers, database explosion is specifically an
> Essbase problem. As I said in my previous post, see
> http://www.olapreport.com/DatabaseExplosion.htm to understand this
> better. Note that it has little to do with data compression, which most
> MOLAPs do (Essbase being not much worse than the others in this regard).
>

From the above referenced link:

> Multidimensional database storage: Contrary to the propaganda by some
> vendors, the type of database technology used to store multidimensional
> data has almost no effect on the explosion phenomenon. This is a problem
> that is caused by the mathematics of data generation, and has nothing to
> do with the database technology used to store the data -- so the popular
> myth that MOLAPs inevitably suffer from database explosion is completely
> untrue. Indeed, optimized multidimensional storage is much more efficient
> than relational storage, so a good MOLAP will always take less disk than
> a good ROLAP.

I think we're splitting hairs when it comes to the idea that
compression is not the distinguishing factor. Are you saying that
the amount of pre-calculation is? OK, so you either don't
pre-calculate your data (i.e. let it calc at request time), or you
compress it. What other options are there?

Although I'd agree that it's a myth that all MOLAPs suffer from
database explosion, I think in general there's more pre-calculation
in a MOLAP than in a ROLAP or HOLAP, making them more susceptible.

Perhaps the other packages you mention decide for you which
combinations will be pre-calculated or not. So I agree, this isn't
compression. Still, Essbase can be set to pre-calculate as well; it's
just a matter of your configuration, and it's more of an implementation
issue than something the software does for you. So assuming you could
mirror a Cognos, or AS, etc. storage volume by doing the 'dynamic
calc' configuration in Essbase, the rest of the difference in disk
storage would still be due to the amount of compression.


Forgive me if I'm oversimplifying, but if so what am I overlooking
here?

Nigel Pendse

Jun 12, 2002, 3:42:51 AM
"paul" <phopp...@yahoo.com> wrote in message
news:dd5862f8.0206...@posting.google.com...

<snip>


> I think we're splitting hairs when it comes to the idea that
> compression is not the distinguishing factor. Are you saying that
> the amount of pre-calculation is? OK, so you either don't
> pre-calculate your data (i.e. let it calc at request time), or you
> compress it. What other options are there?

Yes, pre-calculation is the main factor, just as the article says.

>
> Although I'd agree that it's a myth that all MOLAPs suffer from
> database explosion, I think in general there's more pre-calculation
> in a MOLAP than in a ROLAP or HOLAP, making them more susceptible.

Simply not true. There is more pre-calculation in *Essbase*, not in other
MOLAPs and HOLAPs, than a ROLAP. It's why all the best MOLAPs have much less
disk consumption than *any* ROLAP, regardless of application size.

>
> Perhaps the other packages you mention decide for you which
> combinations will be pre-calculated or not. So I agree, this isn't
> compression. Still, Essbase can be set to pre-calculate as well; it's
> just a matter of your configuration, and it's more of an implementation
> issue than something the software does for you. So assuming you could
> mirror a Cognos, or AS, etc. storage volume by doing the 'dynamic
> calc' configuration in Essbase, the rest of the difference in disk
> storage would still be due to the amount of compression.

Yes, if you pre-calculated all to the same extent, then compression would
indeed be the main factor. But you don't have that option, as Essbase won't
work if you try to do as little pre-calculation as the others.

>
>
> Forgive me if I'm oversimplifying, but if so what am I overlooking
> here?

You're not oversimplifying, but failing to understand how the products work.
As I said, compression is *not* the main factor. Most MOLAPs do some level of
compression, and the difference between the best and the worst is probably
no more than 2:1. But some are designed to do a lot more
pre-calculation than others, and the difference can easily be 100:1 or more,
so this is a much, much more significant factor. I think Analysis Services
and PowerPlay do have slightly better data compression than Essbase, but
this is only a minor factor, and even if Essbase were as good, it would still
suffer badly from database explosion.

With Essbase, you can only reduce the pre-calculations to a limited extent
(though it's better than it used to be); with iTM1, there's no
pre-calculation at all. With Analysis Services and PowerPlay there is some
degree of pre-calculation which can be set automatically or controlled to
some extent by the user; in both cases, it will be far less than Essbase.

The problem is that Essbase started out with a 100% pre-calculation strategy
that was copied from FCS-Multi, the product that was used by its original
authors in EPS, their previous company, in the 1980s. That in turn was a
copy of System W, whose design decisions were made in 1980, for mainframes.
It suffered from database explosion in *exactly* the same way as Essbase.
MicroStrategy also used to suffer from it (despite being a ROLAP), until
this was changed in an early release.

Analysis Services, PowerPlay and TM1 were designed from scratch to have a
different strategy, one that took advantage of more modern PC-style
architectures (with lots of RAM and CPU, but slower disk and I/O than
mainframes). In general, they pre-calculate as little as possible. Not only
does this greatly reduce build time and disk space consumption, as expected,
but it also seems to help query times in a way that hadn't been expected. It
would be hard to change Essbase to work in the same way without junking both
its calculation engine and its underlying database design, as well as its
data write-back approach (Analysis Services and iTM1 both implement
write-back in a different way). To do that would simply turn Essbase into a
copy of Analysis Services -- hard to sustain when it's so much more
expensive, though Hyperion may have no choice if it wishes to make Essbase
competitive with more modern products.
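
The two strategies can be caricatured in a few lines of Python (a
heavily simplified sketch; neither engine literally works this way, and
the tiny outline below is made up):

    # Strategy A (Essbase-style): batch-calculate and store every
    # ancestor combination up front -- fast reads, exploding storage.
    # Strategy B (TM1-style): store only the leaves and aggregate on
    # demand, caching results in RAM.
    from functools import lru_cache

    leaf_data = {("Jan", "ProductA"): 100.0, ("Feb", "ProductA"): 120.0}
    children = {"Q1": ["Jan", "Feb", "Mar"],
                "Year": ["Q1", "Q2", "Q3", "Q4"]}

    @lru_cache(maxsize=None)
    def value(time, product):
        # Aggregates over the time hierarchy only, to keep the toy small.
        if (time, product) in leaf_data:
            return leaf_data[(time, product)]
        return sum(value(k, product) for k in children.get(time, []))

    print(value("Q1", "ProductA"))  # 220.0, computed (once) at query time

Nothing above is pre-calculated or written to disk; the cache simply
fills with whatever the queries touch.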

John Keeley

Jun 12, 2002, 11:51:28 AM
Nigel,

Interesting post that nicely summarises the different precalculation
strategies of Essbase, TM1 & AS.
Can you explain a little more about how their write-back strategies
differ?
Perhaps including Oracle in the discussion.

Regards,

John


"Nigel Pendse" <nig...@compuserve.com> wrote in message news:<ae6u1t$n30$1...@nntp-m01.news.aol.com>...

Nigel Pendse

Jun 12, 2002, 4:05:13 PM

"John Keeley" <john....@chaucerplc.com> wrote in message
news:7454b5cb.02061...@posting.google.com...

> Nigel,
>
> Interesting post that nicely summarises the different precalculation
> strategies of Essbase, TM1 & AS.
> Can you explain a little more about how their write-back strategies
> differ?
> Perhaps including Oracle in the discussion.

Essbase, like the products from which it was derived, keeps previous stored
input data, data updated by users and calculated results in the same place.
That limits the scope for compression, as you can't randomly write to
compressed data structures which don't have 'holes' waiting for null cells
to be populated. Essbase data blocks have to be able to grow or shrink as
the amount of data in them changes, and this also causes page file
fragmentation as blocks get moved. But there is an element of compression in
the blocks -- the trade-off is that more compression would have a cost as
the data would have to be compressed every time a block was written to disk,
and decompressed every time even a single cell had to be read if the block
wasn't in cache.

Analysis Services keeps write-back data and aggregates separate from stored
input data, which can therefore be much more compressed. Only delta values
are stored for updates, and existing calculated results can still be used in
conjunction with the delta values -- so the effect of some changes can be
seen very rapidly.
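
Roughly, the delta idea looks like this (a sketch of the concept in
Python, not Microsoft's actual storage format; the cells and values are
invented):

    # Base data stays compressed and read-only; user updates accumulate
    # as differences in a small, separate delta store.
    base = {("Jan", "ProductA"): 100.0, ("Feb", "ProductA"): 120.0}
    delta = {}

    def write_back(cell, new_value):
        delta[cell] = new_value - base.get(cell, 0.0)  # base never changes

    def total(cells):
        # Stored base aggregate plus the (small) aggregate of the deltas.
        return (sum(base.get(c, 0.0) for c in cells)
                + sum(delta.get(c, 0.0) for c in cells))

    write_back(("Jan", "ProductA"), 150.0)
    print(total([("Jan", "ProductA"), ("Feb", "ProductA")]))  # 270.0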

TM1 never writes data updates to disk -- the active data is always in RAM.
When cubes are stored on disk they can therefore be compressed as they don't
need to support random read or write operations. And no aggregates are
stored.

Express is more like Essbase, but now has better support for sparse
aggregates (which is why it produced APB-1 benchmark figures back in 1999
that Essbase has never been able to match).


paul

Jun 12, 2002, 6:50:03 PM
OK, fair enough. Thanks for your comments... Just so you know where I'm
coming from, I'm an IT analyst at an all-Essbase shop (currently).
I'm open to considering AS, as a matter of fact, since we already own
it as well to some extent.

>
> Yes, pre-calculation is the main factor, just as the article says.
>

I plead guilty to not reading your entire article the first time.
Very glad I read it through this time.


> Yes, if you pre-calculated all to the same extent, then compression would
> indeed be the main factor. But you don't have that option, as Essbase won't
> work if you try to do as little pre-calculation as the others.
>

...

OK, so then is the real problem with Essbase the fact that you can't
retrieve enough dynamically calculated data, forcing you to
pre-calculate? What is it about that part of their engine which causes
the big hit? Is it due to support for more complex calculations or
just inefficiency?

I'd be interested to see how Essbase 6.x stacks up on your data
explosion graph in your conclusions, alongside AS and Cognos as well.
(http://www.olapreport.com/DatabaseExplosion.htm#Conclusions) Why is
it that you recommend the AS pre-calculation optimizer as the hands
down best solution to the problem, but it's not even on the graph?

Also, what about the flip side? Surely a pure-RAM solution like iTM1,
which uses the lowest storage space, falls into the 'long query time'
category on larger apps (and/or larger apps become impossible).


> >
> >
> > Forgive me if I'm oversimplifying, but if so what am I overlooking
> > here?
>
> You're not oversimplifying, but failing to understand how the products work.

Let's say it's understanding how the *calculation engines* for the
*other* products work that may be causing my perception gap here --
specifically, the inner details of the various approaches to dynamic
calculation vs. batch calculation/pre-calculation.


> As I said, compression is *not* the main factor. Most MOLAPs do some level of
> compression, and the difference between the best and the worst is probably
> no more than 2:1. But some are designed to do a lot more
> pre-calculation than others, and the difference can easily be 100:1 or more,
> so this is a much, much more significant factor. I think Analysis Services
> and PowerPlay do have slightly better data compression than Essbase, but
> this is only a minor factor, and even if Essbase were as good, it would still
> suffer badly from database explosion.
>
> With Essbase, you can only reduce the pre-calculations to a limited extent
> (though it's better than it used to be); with iTM1, there's no
> pre-calculation at all. With Analysis Services and PowerPlay there is some
> degree of pre-calculation which can be set automatically or controlled to
> some extent by the user; in both cases, it will be far less than Essbase.
>

Your arguments (in the article) for the AS calculation engine are very
convincing, and certainly consistent with what I've heard from other
practitioners. And believe me, at this point in my career, I've got a
niggling concern about getting stuck holding the 'Essbase bag' if you
will. What does AS allow in terms of the following two issues which
I'm currently comfortable dealing with in Essbase? (given an
assumption that I prefer minimal custom programming to API hooks)

1. Drill-through to SQL transactional data
2. Excel front-ends that allow writeback for budgeting users

Nigel Pendse

Jun 13, 2002, 4:25:31 AM

"paul" <phopp...@yahoo.com> wrote in message
news:dd5862f8.02061...@posting.google.com...

> OK, fair enough. Thanks for your comments... Just so you know where I'm
> coming from, I'm an IT analyst at an all-Essbase shop (currently).
> I'm open to considering AS, as a matter of fact, since we already own
> it as well to some extent.
<snip>

>
> OK, so then is the real problem with Essbase the fact that you can't
> retrieve enough dynamically calculated data, forcing you to
> pre-calculate? What is it about that part of their engine which causes
> the big hit? Is it due to support for more complex calculations or
> just inefficiency?

Probably more of the latter than the former. In theory, Analysis Services
supports more complex calculations than Essbase, though on-the-fly
calculation performance can be poor (it's very fast at aggregating, not
nearly so fast at other calculations). And Analysis Services currently has
the very annoying weakness that it refuses to store the results of
calculations other than aggregations, which has the effect of disqualifying
it from some interactive financial apps.

>
> I'd be interested to see how Essbase 6.x stacks up on your data
> explosion graph in your conclusions, alongside AS and Cognos as well.
> (http://www.olapreport.com/DatabaseExplosion.htm#Conclusions) Why is
> it that you recommend the AS pre-calculation optimizer as the hands
> down best solution to the problem, but it's not even on the graph?

Microsoft has never published APB-1 figures, so it can't be compared
directly. But, in most cases, Analysis Services produces figures almost as
good as iTM1's.

>
> Also, what about the flip side? Surely a pure-RAM solution like iTM1,
> which uses the lowest storage space, falls into the 'long query time'
> category on larger apps (and/or larger apps become impossible).

Yes, iTM1 isn't suitable for really large apps. But the calculation engine
is blindingly fast, so for interactive apps that require data
entry/immediate recalc, it can probably go higher than either Analysis
Services or Essbase. For read-only/mainly-read apps, iTM1 definitely isn't
as scalable as the others.

<snip>

> Your arguments (in the article) for the AS calculation engine are very
> convincing, and certainly consistent with what I've heard from other
> practitioners. And believe me, at this point in my career, I've got a
> niggling concern about getting stuck holding the 'Essbase bag' if you
> will. What does AS allow in terms of the following two issues which
> I'm currently comfortable dealing with in Essbase? (given an
> assumption that I prefer minimal custom programming to API hooks)
>
> 1. Drill-through to SQL transactional data
> 2. Excel front-ends that allow writeback for budgeting users

Analysis Services supports automatic drill-down to the relational data from
which the cube was derived (though not every client tool implements this
feature). It also has something called "Actions", which are arbitrary
commands defined on the server and seeded by the coordinates of the point
where the user selects them. They could include, for example, calling up a
dynamic URL, doing any sort of SQL command, or running a parameterized
program such as a relational report writer.

There are at least ten Excel front-ends available for Analysis Services, at
least five of which support write-back. For example, Comshare's MPC product
includes a read-write Excel add-in, designed for budgeting, that supports
both Essbase and Analysis Services.

Most read-write front-ends for Analysis Services allow proper "what if?"
stuff, too -- something you don't get with Essbase or iTM1. That is, you can
make temporary data changes, immediately see the results of the changes
anywhere in the cube, then decide whether or not to commit the change. Until
you do, your changes are private to you. John can probably tell you more
about which Analysis Services Excel add-ins he likes best as he's tried out
quite a few.
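
That "what if?" behaviour amounts to a session-private overlay on the
shared cube. Roughly, in Python (a toy sketch of the concept, not any
vendor's API; the cell names are invented):

    # Uncommitted changes shadow the shared data for one user only.
    shared = {("Jan", "ProductA"): 100.0}

    class Session:
        def __init__(self):
            self.pending = {}            # visible to this session only

        def set(self, cell, value):
            self.pending[cell] = value

        def get(self, cell):
            return self.pending.get(cell, shared.get(cell, 0.0))

        def commit(self):                # publish, or simply discard
            shared.update(self.pending)
            self.pending.clear()

    s = Session()
    s.set(("Jan", "ProductA"), 90.0)
    print(s.get(("Jan", "ProductA")))    # 90.0 -- this user's private view
    print(shared[("Jan", "ProductA")])   # 100.0 -- everyone else's view
    s.commit()                           # now everyone sees 90.0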


Arthur Lee

Jun 13, 2002, 8:39:35 AM
I will not comment on Nigel's comments regarding AS, but will on iTM1.
Your comment regarding 'long query' times really depends on your
definition of it. Does it include loading the data, calculating it
and then retrieving a query? If so, I would argue that Applix iTM1
is one of the most scalable OLAP platforms in the marketplace today.

Your comparison of the lowest storage space in RAM is an "apples to
oranges" type of comparison, since Essbase and Applix iTM1 use
different methods of storing their base and calculated data. In fact,
the size of the application may have little bearing on the scalability
of an Applix iTM1 application.

Arthur Lee
Applix, Inc.
www.applix.com

phopp...@yahoo.com (paul) wrote in message news:<dd5862f8.02061...@posting.google.com>...

Nigel Pendse

Jun 13, 2002, 12:25:39 PM

"Arthur Lee" <al...@applix.com> wrote in message
news:9cf027c2.02061...@posting.google.com...

> I will not comment on Nigel's comments regarding AS, but will on iTM1.
> Your comment regarding 'long query' times really depends on your
> definition of it. Does it include loading the data, calculating it
> and then retrieving a query? If so, I would argue that Applix iTM1
> is one of the most scalable OLAP platforms in the marketplace today.
>
> Your comparison of the lowest storage space in RAM is an "apples to
> oranges" type of comparison, since Essbase and Applix iTM1 use
> different methods of storing their base and calculated data. In fact,
> the size of the application may have little bearing on the scalability
> of an Applix iTM1 application.
>
> Arthur Lee
> Applix, Inc.
> www.applix.com

First of all, let me congratulate you for stating that you work for
Applix -- too many vendor posters try to hide the fact. BTW, you were
responding to Paul's post, not mine, so your comments relate to his.

Secondly, I agree with you that for intensive read-write financial OLAP
applications, iTM1 is unbeatable. Neither Analysis Services nor Essbase
comes close, though Analysis Services is usually quicker than Essbase.
Because it's so fast, iTM1 can actually handle significantly larger apps
than Analysis Services or Essbase and still deliver a good response
following a data change. And unlike Essbase, you don't have to remember to
run any scripts to make sure that calculated results are in line with the
latest input data.

But for read-only/mainly-read apps, other disk-oriented products can
handle much larger apps: larger dimensions, and much more input data.
And please don't mention the legendary 64-bit version of iTM1 that
almost no-one has bought...

John Keeley

Jun 14, 2002, 7:36:25 AM
How wonderful to hear from someone from Applix; I thought you were all
asleep!

I'm a big fan of TM1 & I'm disappointed that you haven't made the most
of the product.

Am I being unfair by saying that it seems you have rested on its
laurels with its success in financial solutions & let the product sell
itself?
Maybe 5 years ago it did sell itself.
But by not making it an OLAP tool that could be central to a DSS that
supports the whole company, not just finance, you have let Microsoft
take over.

It's probably too late now to stop the Microsoft bandwagon, but I hope
you are going to raise your game, or sell TM1 to someone who knows its
worth.

Regards,

John

al...@applix.com (Arthur Lee) wrote in message news:<9cf027c2.02061...@posting.google.com>...

Jerry

Jun 24, 2002, 11:12:48 AM
Well, it's true that (raw) disk space is not expensive. However, it
can become expensive if a customer has to buy it from a provider. E.g.,
it is not the core competency of a bank to administer its servers;
that's why they outsource the administration to a third party, which
is responsible for disk space, security, ...
Now they have to buy the disk space plus the administration service,
and the price per MB of disk space becomes much higher.

Hyperion calls itself the market leader in MOLAP -- but I don't
understand that, given the bad performance (and high cube sizes) of
Essbase.

six...@yahoo.com (zyro) wrote in message news:<5db607db.02061...@posting.google.com>...
