Looks like Mike pretty much summed things up, but let me just add some
comments..
> 1) What is the threshold for parameters/streaming data that we need to
> be aware of to prevent our screens from freezing?
This kinda sounds like a loaded question about another system, but maybe I'm
wrong ;) If anyone ever sees a freezing screen, they should contact us
immediately.
Like Mike mentioned, the screens should never freeze (that's most likely a
bug, graphics, or hardware problem)... they will just tend to "slow down" as
the engineer adds more and more displays to the desktop. When I say "slow
down", I mean the number of screen refreshes per second decreases as the
load/complexity increases.
There's really no exact threshold on the number of parameters, displays, etc.
that we can pin down... It depends on so many factors that it's almost
"intractable" to say with 100% certainty (computer speed, graphics speed,
network speed, server speed, data rate, aggregate data rate, complexity of
derived parameters, complexity of displays, amount and types of dynamics,
etc.). What
we *can* say though, is that the "distribution of data" from the CDS to the
client is almost never the main factor (from our bench tests and
experience). What is really the main driver is the amount and complexity of
displays that are on your analysis window(s).
First of all, the most power-hungry displays are the Stripcharts,
CrossPlots, Frequency displays, and the 3D models. These displays are
complex and require a fair amount of graphics and/or mathematical processing
(the 3D models most of all). Displays with a large amount of "history", such
as StripCharts and CrossPlots, can be especially power hungry. The longer the
history displayed, the more power is needed. Imagine a StripChart or
CrossPlot that is showing 3-4 *hours* of data in one display. Now imagine 10
of those on one window... You see where I'm going. That's a lot of data
points to draw, process, and manage.
With all of these possible variables, it's best to just simply keep an eye
on the overall refresh rate of an AnalysisWindow to give you a feel for how
close you are "to the edge". To determine how much things are slowing down
for a given AnalysisWindow, you can bring up the "Performance Window" and
look at the AW's refresh rate per second. Just hold down the <shift> key and
click the "Performance" button on the dashboard. 65 updates per second is
pretty much the upper limit (as fast as it will ever refresh).... Anywhere
from 10 updates per second up is probably visually acceptable, but it's all
up to the FTE's preferences. 1 update per second may be acceptable depending
on what kind of data they are looking at... But don't worry, I've
rarely seen update rates below 10 even in the most complex AWs imaginable.
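If you want to sanity-check an update rate outside of IADS, the idea is
trivial to sketch. This is not IADS code; `draw_frame` below is just a
hypothetical stand-in for one full redraw of an AnalysisWindow:

```python
import time

def measure_update_rate(draw_frame, duration=0.5):
    # Count how many full redraws complete per second; this is
    # roughly the number the Performance Window reports per AW.
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        draw_frame()
        frames += 1
    return frames / (time.perf_counter() - start)

# A fake "display" whose redraw takes roughly 5 ms:
rate = measure_update_rate(lambda: time.sleep(0.005))
# A more complex window just makes draw_frame slower, and the rate drops.
```

The point is only that refresh rate is a direct, observable measure of load;
everything else (parameter counts, display mix) feeds into it indirectly.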
> I know there's an aggregate rate that can't be crossed (or there are
> consequences, like some data not getting recorded, getting clipped,
> etc. ) Not sure what determines that aggregate rate though. I know
> that more parameters at higher rates, gets you closer to the
> aggregate. Does that aggregate change from server to server? How is
> that calculated? Not sure... Any one else...
Mike pretty much covered this.... but I can comment that the "some data not
getting recorded or getting clipped" just doesn't happen. It's either "all"
or "nothing". Either your server can handle the data rate or not. When you
capture data for a given data setup, you'll be able to tell really quickly
from the OpsConsole whether you're in trouble or not.
Of course the answer here may mean you'll need to trim down your data set
and try again.... or it may mean that you need to go out and buy a faster
CDS PC. There are some amazingly fast machines for sale now that are
extremely cheap... and you'd be hard pressed to create enough data to
"swamp" them ;)
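As for how an aggregate rate gets calculated: at its simplest, it's just the
sum, over every parameter sent to the CDS, of sample rate times sample size.
All the numbers below are made up purely for illustration:

```python
# Hypothetical parameter groups: (parameter count, samples/sec each).
groups = [
    (2000, 50),     # low-rate bus parameters
    (500, 400),     # mid-rate analog channels
    (64, 10000),    # high-rate vibration channels
]

samples_per_sec = sum(count * rate for count, rate in groups)
bytes_per_sample = 8  # assumed size of one sample on the wire
aggregate_mb_per_sec = samples_per_sec * bytes_per_sample / 1e6
# 940,000 samples/sec -> 7.52 MB/sec under these assumptions
```

This is why "more parameters at higher rates gets you closer to the
aggregate": every term in that sum adds up.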
Also, there are a lot of simple techniques that can vastly improve your
aggregate rate. A technique that we used with JSF was to offload all of the
"TPP derived" parameters (like bit picks, etc.) that were being done on the
TPP to the Iads client. In other words, why crack apart a word on the front
end to create 16+ parameters (each containing only 1 bit of information)
when you can do this easily in an IADS derived equation on the client? Right
there is a 16-ish to 1 reduction in the overall data storage and bandwidth
needed... I can't overstate the importance of understanding this fact.
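To make the bit-pick point concrete, here's a rough sketch (plain Python,
not IADS derived-equation syntax) of what a client-side bit pick does:

```python
def bit_pick(word, bit):
    # Extract one bit (bit 0 = LSB) from a telemetry word; this is
    # what a "bit pick" derived parameter boils down to.
    return (word >> bit) & 1

# One 16-bit status word crosses the network...
status_word = 0b1010_0000_0000_0101

# ...and is cracked into 16 one-bit parameters locally, on demand:
bits = [bit_pick(status_word, b) for b in range(16)]
# One streamed word instead of 16 separate parameters: the 16-ish
# to 1 savings in storage and bandwidth mentioned above.
```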
> 2) Is there an allocation of data to each room/desktop/analysis window?
> If I understand the question, there's no allocation per desktop per
> se, but it is determined by the capacity of the network equipment in the
> control rooms.
Nope.. "The sky is the limit"...
Per room it's "whatever the CDS can handle based on its load/hardware".
You'll have to test the boundaries of your own hardware's limits by direct
observation (i.e. simulated data). We have programs to simulate a given
parameter load (CDSStressTest), but it's probably better for you just to
set up your TFE/TPP in simulation mode and "try it out".
Per Desktop/AW, the "cost" of the window update is what limits engineers
from overburdening the entire system. The more they add, the slower *their*
(and only their) windows get, because it's a distributed system. The CDS
only handles the data.. while each PC handles its own
AWs/Displays/Derived/Dynamics/etc. When they notice that their window is
slow, they tend to simplify things or arrange them differently. It's a
natural limiting factor.
Don't get me wrong, it is true that each engineer affects the others. The
cost of sending data to a given PC in the control room isn't free, but as
I've said before, it's so much less than the cost of displaying the data
that we can almost leave it out of the equation (i.e. the graphics
performance will limit them way before they begin to affect others in the
room). Put it this way: since IADS only requests data for displays that are
currently "visible" on a given AnalysisWindow, it would be *extremely*
difficult (if not impossible) for any one engineer to overload the system.
The only display that would even have a remote possibility of doing this
would probably be the ICAW display. It takes a potentially vast number of
parameters and grinds out a relatively small amount of graphical
information. But there is still one more important limiting factor.
If you want to break it down into semi computer science terms: in order to
request data, each Iads client is attached (via TCP/IP) to its own "thread"
on the CDS. Each thread is therefore "load balanced" by the operating system
so that it "plays nice" with all of the other threads. This means that even
the data request mechanism is semi "throttled" per IADS client.... which
prevents one client from taking down the entire system.... and as I
mentioned before, the data request mechanism is the "lightest", most
efficient item in the chain. This "throttling" of the data will again show
up as (surprise): a reduced update rate of the AnalysisWindow. This puts us
back to the natural limiting factor argument.
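The thread-per-client idea can be sketched in a few lines. This is not the
CDS source, just a generic Python illustration of one TCP connection per
client, each served by its own OS-scheduled thread:

```python
import socket
import threading

def serve_client(conn):
    # One thread per attached client. The OS scheduler time-slices
    # these threads, so a greedy client only slows its own replies.
    with conn:
        while True:
            request = conn.recv(1024)
            if not request:
                break
            # Stand-in for "look up the requested parameters":
            conn.sendall(b"data:" + request)

def run_server(host="127.0.0.1", port=0):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))   # port 0: let the OS pick a free port
    server.listen()
    def accept_loop():
        while True:
            conn, _addr = server.accept()
            threading.Thread(target=serve_client, args=(conn,),
                             daemon=True).start()
    threading.Thread(target=accept_loop, daemon=True).start()
    return server.getsockname()

host, port = run_server()
client = socket.create_connection((host, port))
client.sendall(b"alt")
reply = client.recv(1024)   # b"data:alt"
client.close()
```

Because each connection lives on its own thread, one client hammering the
server degrades only its own reply latency; the OS keeps the other threads
(clients) serviced.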
> 3) If it is the room that is sharing the allocation, would that mean
> less Prop parameters if multiple disciplines are available?
>
> Again, if I understand the question, in the strictest sense, yes, if
> there are other disciplines, that may limit the amount of parameters
> in your discipline. The aggregate numbers are pretty high though.
> Any symvionics guys want to help with this...
Yes... Like I said before:
"Either your current CDS PC can handle the data rate, or it can't"
If it can't, then you either have to:
1) Intelligently minimize the number of parameters sent to the CDS by using
techniques mentioned before (moving derived equations into IADS vs TPP)
2) Go out and buy a faster CDS PC
3) Start reducing the parameters available to the engineers (sadly)
We're hoping that if you have a fairly new CDS PC, and you apply rule #1
above, you won't ever have to worry about this situation.
> 4) Do the derived parameters affect this, or is it simply a raw amount
> of streaming data (does the workstation do the derivations)?
Yes, the workstations do the derivations... and *only* when the derivations
are needed/displayed. Remember again, the CDS *only* handles the data (i.e.
caching data server), so derived equations are pretty much "zero" cost to
the CDS in the scheme of things. Also remember, derived parameters inside of
IADS could actually help this situation (by moving TPP or TFE computed
derived parameters to the client).
As Mike mentioned, it all boils down to the amount of TPP parameters that
you are sending the CDS (so look at every "TPP" type parameter in the
ParameterDefaults table). We can safely leave IAP parameters out of the
equation for now because they are rare at this point..
> It depends on where the parameters are derived. If they are derived
> on the TFE, the TFE takes care of the calculation (and that will
> affect the aggregate rate).
True... Because now the CDS has a "new additional parameter" that was
created from existing data. In most cases, it's a waste of resources in my
mind... better to do the derived in IADS in a "pay per view" mode ;)
> Some can be run as external processes on
> a separate system, and fed back to the CDS (IADS calls them an
> IAP, IADS Auxiliary Process), and again that would affect the
> aggregate.
Very true... but this situation is rare. It's really only used for custom
functions that are rather "heavy" and only want time marching forward (i.e.
MassProp, PAD, NoseBoom, etc).
> They can also be run from the IADS client machine within
> the IADS software. In this last case, the workstation would do the
> derivation, and I don't think that would affect the aggregate rate.
True... As explained before, this doesn't affect the aggregate rate..
> 5) If the workstation does the derivations separately from the
> previous issue, would that set us up for another limitation?
> They have minimum system requirements for the systems used to display
> the screens in the control room, the symvionics folks can better
> answer that one. I'm sure it's also determined by the number of
> internal derived equations, etc.
Only a limitation on the complexity of their own AW.... and on a modern PC,
this limitation is so far above the normal usage that it would be very
unlikely for the engineer to even notice.
The best approach is just to create the windows and test with data... As
soon as the window is too slow, then partition the display using tabs, more
AWs, or other techniques.
Also, it might be possible for us to optimize our display code to improve
the refresh rate (which we've done in the past for JSF).
> 6) Is our range under the same limitations that Fort Worth is
> currently working at?
> Any Fort Worth guys want to help with this one.
It's all up to what hardware you guys are running.... I think Mike Burt
covered this fairly well... My guess is that it would be extremely close to
Fort Worth.
That's an easy test. Simply load up your room with a Ft. Worth config and
simulate the data... see what happens. Open up all of the desktops/windows..
Anything slow? OpsConsole show that the CDS is keeping up with the data?
The best answer here is a simulated test.
> 7) Are we comparable to Pres Helo (I realize this is an early question
> as our screens are not mature)? Can we see what they have done?
>
> That can probably be arranged (but don't quote me on that). Pete C
> can probably work that out...
>
> Any one else want to take a crack at some of this...
If this is the JSF FTEs speaking, I would guess that the PresHelo screens
would be a lot less complex.. but that's just a gut feeling ;)
Jim