
Has anyone produced a board using Kicad?


Lord Vain

May 13, 2006, 2:18:45 AM
I was wondering if anyone here has ever designed and produced a working PCB
with Kicad, the free open-source CAE package. I've looked around but haven't
found anything.

*** Posted via a free Usenet account from http://www.teranews.com ***

Ian Bell

May 13, 2006, 3:28:50 AM
Lord Vain wrote:

> I was wondering if anyone here has ever designed and produced a working
> PCB with Kicad, the free open-source CAE package. I've looked around but
> haven't found anything.
>

Try looking at the kicad user group at yahoo:

http://groups.yahoo.com/group/kicad-users/

I am sure you will find many users there who have successfully made boards.

Ian

Hal Murray

May 16, 2006, 1:09:03 AM
>I was wondering if anyone here has ever designed and produced a working PCB
>with Kicad, the free open-source CAE package. I've looked around but haven't
>found anything.

I haven't built any board with it, but I did work with it enough to
be sure I could use it if I needed to.

I think the key idea is that the auto-router is next to useless.
Are you prepared to hand route your whole board?
--
The suespammers.org mail server is located in California. So are all my
other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's. I hate spam.

fpga...@yahoo.com

May 17, 2006, 10:56:42 PM

Hal Murray wrote:
> I haven't built any board with it, but I did work with it enough to
> be sure I could use it if I needed to.
>
> I think the key idea is that the auto-router is next to useless.
> Are you prepared to hand route your whole board?

I still use an older version of PCB (050318), which is fast. And I do
hand route almost everything, as its autorouter isn't up to a high-end
($30K) router either. On the other hand, I also lay out very dense
boards that even OrCad/Cadence chokes badly on, so I've gotten used
to hand layout being required rather than optional. I wish the gEDA
integration were as good as Kicad's at times, but it's never been a severe
problem either.

I've turned several high-density boards with PCB without problem:
double-sided SMT, small form factor (PC104+), with a BGA560 FPGA,
multiple TSOP2 SDRAMs, EEPROMs, Compact Flash, and specialty analog
audio, ISA/PCI -- several thousand connections in the 3.75" x 3.5" form
factor. I gave that board's flat netlist to another designer, who was
unable to get OrCad to route it with the same placement (or any
placement). It took me a little over a week to hand lay out with PCB, and
he spent that long on an unusable OrCad effort after being given both a
working placement and netlist.

I've also done several 16" x 22" dense double-sided 6-8 layer SMT
layouts with PCB that have a couple hundred thousand connections, also
hand laid out, which took a couple of weeks each. I again handed one of
those flat netlists to another OrCad guy, and he was unable to route
it either. Over the years, several clients have autorouted my (and
other engineers') designs, only to end up with useless mazes that are
unmaintainable and difficult to debug due to signal integrity issues.
Autorouting is highly overrated for high-speed digital design using
dense SMD packages.

Humans, with a little practice, still do MUCH better.

DJ Delorie

May 18, 2006, 12:02:19 PM

fpga...@yahoo.com writes:
> I still use an older version of PCB (050318) which is fast.

The HID project is done, so you might be interested in trying the
latest snapshot (20060422). You can configure it for either Gtk or
Lesstif (or openmotif, depending on what you have). The lesstif hid
is very streamlined, and a number of our regulars claim it's much
faster and easier to use than what they were used to.

The Gtk hid looks much like what you're used to.

FYI: we've gotten the Gtk hid to build on windows, and the lesstif hid
builds on Mac OS/X (and solaris, hp/ux, aix, irix, etc).

> And I do hand route almost everything, as its autorouter isn't up
> to a high end ($30K) router either.

I've commented that we need a volunteer to replace the gridless router
with a topological router. Gridless just can't keep up with today's
dense layouts. I'm sure the topological router won't be up to
everything either, but it should do a lot better than gridless.

My (hopefully) next "big" project is a global trace "puller" that lets
you be sloppy with your layout, and have PCB clean things up for you
(straight runs, graceful curves, minimum trace length). We'll see if
I ever finish (or even start) it ;-)

> I wish the gEDA integration was as good as Kicad at times,

It's a common request, and we're making *some* progress with it. Dan
just offered his first pass at a "mode menu" for gschem, which lets
you do pcb-specific things right from the gschem menu. PCB also has a
listener port for remote control. He's working on tying things
together in useful ways (like cross-selecting).

> I've turned several high density boards with PCB without problem
> that are double sided smt, small form factor (PC104+) with BGA560
> FPGA, multiple TSOP2 SDRAMs, EEPROMs, Compact Flash, and speciality
> analog audio, ISA/PCI -- several thousand connections in the 3.75" x
> 3.5" form factor.

Pretty pictures? (and I'm always on the lookout for .pcb files for my
testing, if you have some tricky ones you can send me).

> I've also done several 16" x 22" dense double sided 6-8 layer smt
> layouts with PCB that have a couple hundred thousand connections,

Wow.

The lesstif hid of PCB allows boards up to about a quarter mile per
side, in case you need to go even bigger :)

samiam

May 18, 2006, 3:22:23 PM
> Wow.
>
> The lesstif hid of PCB allows boards up to about a quarter mile per
> side, in case you need to go even bigger :)

PCB is used by serious hobbyists (like myself) and probably even
commercial groups ... my friend works in an R&D lab and uses it to
run off prototypes ... I never asked if it's put into commercial
work, but he's in the embedded field ... so you guess.

I am sure you (and the other developers) already know this .. if you
get regular feedback ;)

DJ Delorie

May 18, 2006, 3:50:54 PM

samiam <samiamS...@spamalert.com> writes:
> PCB is used by serious hobbyists (like myself) and probably even
> commercial groups ... my friend works in an R&D lab and uses it to
> run off prototypes ... I never asked if its put into commerical work
> but hes in the embedded field ... so you guess.

I know of a few companies that use it for commercial products, but
they're not all willing to disclose their EDA process to the public.
It's a competitive field out there.

Hal Murray

May 19, 2006, 11:31:51 PM

>The lesstif hid of PCB allows boards up to about a quarter mile per
>side, in case you need to go even bigger :)

Many years ago, my boss wrote a CAD package. When he got the
gerber output part working, he took his laptop down to the local
board house and handed them a floppy disk. They read it in to check
things. Something was off by 10^6. I forget which way. The board
was either the size of a period or it covered all of San Jose.

A few quick edits fixed that.

fpga...@yahoo.com

May 20, 2006, 1:29:59 AM

DJ Delorie wrote:

> fpga...@yahoo.com writes:
> > I wish the gEDA integration was as good as Kicad at times,
>
> It's a common request, and we're making *some* progress with it. Dan
> just offered his first pass at a "mode menu" for gschem, which lets
> you do pcb-specific things right from the gschem menu. PCB also has a
> listener port for remote control. He's working on tying things
> together in useful ways (like cross-selecting).

It really needs to be the same tool: one netlist, one symbol library
(each entry referencing both a schematic symbol and a PCB footprint,
with common pin naming/annotation), and two physical windows (one for
the schematic domain, the other for the PCB domain), with a common
working file which contains the physical "tracks" and object placement
for both.

That way, when you hand route the PCB, assigning pins or adding new parts
and nets, they show up as unplaced symbols and rats in the schematic
window. And when you add schematic objects and connections, they show up
as unplaced footprints and rats in the PCB window.
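The single-model scheme described above could be sketched roughly like this (a purely hypothetical illustration; the class names and structure are invented here and come from neither gEDA, PCB, nor Kicad):

```python
# Hypothetical sketch of one shared design model driving two editor
# views, as proposed above -- not actual gEDA/PCB code.

class Design:
    """One netlist/part list shared by the schematic and layout windows."""
    def __init__(self):
        self.parts = {}    # refdes -> {"placed_sch": bool, "placed_pcb": bool}
        self.views = []    # attached observer windows

    def attach(self, view):
        self.views.append(view)

    def add_part(self, refdes, placed_in):
        # A part created in one domain starts out unplaced in the other.
        self.parts[refdes] = {"placed_sch": placed_in == "sch",
                              "placed_pcb": placed_in == "pcb"}
        for view in self.views:
            view.notify(refdes)

class View:
    """A schematic ("sch") or layout ("pcb") window over the same model."""
    def __init__(self, domain, design):
        self.domain = domain
        self.design = design
        self.unplaced = []          # would be drawn as unplaced objects + rats
        design.attach(self)

    def notify(self, refdes):
        if not self.design.parts[refdes]["placed_" + self.domain]:
            self.unplaced.append(refdes)

design = Design()
sch = View("sch", design)
pcb = View("pcb", design)

design.add_part("U1", placed_in="pcb")  # pin assigned while hand routing
design.add_part("R1", placed_in="sch")  # drawn on the schematic first
# "U1" now waits as an unplaced symbol on the schematic side,
# and "R1" as an unplaced footprint on the layout side.
```

Since both windows observe the same model, neither netlist ever has to be exported or re-imported.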

DJ Delorie

May 20, 2006, 10:30:33 AM

fpga...@yahoo.com writes:
> It really needs to be the same tool,

Or at least *seem* like it's the same tool. Otherwise, I agree.

fpga...@yahoo.com

May 21, 2006, 7:14:28 PM

On larger designs, memory is being pushed just to maintain the lists and
objects already instantiated, and paging severely cuts into performance.
When running as a separate application, there is substantial page
replication introduced for every data page across a long list of shared
library instances, plus replication of the netlists. Likewise,
performance is critically tied to working set: a second application
running concurrently with an equally large working set will provoke
substantial cache thrashing, which shows up as memory-latency-induced
jerkiness in the UI as the cache is flushed and reloaded between
contexts. While these may seem like parameters of the application
architecture that can be ignored, perceived UI performance is heavily
dependent on them. Similarly, communication between separate
applications causes context switches, which add to the cache thrashing
by pulling large sections of the kernel into the working set. Consider
that the processor is some 20-100 times faster than L2/L3 cache these
days, and the cache is frequently another 10-50 times or more faster
than memory. Exceeding the cache working set effectively turns the
machine into a 50MHz processor again.
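The arithmetic behind that "50MHz" figure can be checked with a back-of-envelope model (the clock rate, miss rate, and memory latency below are illustrative assumptions, not measurements of any particular machine):

```python
# Back-of-envelope model of the slowdown described above.
# Assumptions (illustrative only): 2 GHz CPU, 1 cycle per cache hit,
# ~200 cycles per access that must go all the way to main memory.

cpu_hz = 2_000_000_000
mem_latency_cycles = 200

def effective_hz(miss_rate):
    # Average cost per access: hits cost 1 cycle, misses cost the
    # full memory latency.
    avg_cycles = (1 - miss_rate) * 1 + miss_rate * mem_latency_cycles
    return cpu_hz / avg_cycles

print(f"{effective_hz(0.0) / 1e6:.0f} MHz")  # working set fits in cache
print(f"{effective_hz(0.2) / 1e6:.0f} MHz")  # 20% of accesses miss
```

With even a 20% miss rate under these assumed latencies, the effective rate lands right around the 50MHz class the post mentions.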

There are substantial performance reasons suggesting that it should be
the same application, (just a different thread at most) to conserve
memory resources, and improve performance. While they may not be
critical for toy student projects, for many real life projects which
are much larger, they become critical UI problems. The sample
ProofOfConcept design I sent you, is about 1/5 the size of several
production designs I have done using PCB.

When the typical desktop CPU comes standard with 10MB or better of L2
cache, these issues might go away. Last time I checked, that was only
available on high-end Itanium processors, well outside the reach of
most mortals in cost (or me, right now).

Stuart Brorson

May 22, 2006, 7:52:00 AM
fpga...@yahoo.com wrote:

Interesting points. My comments/questions:

* From your experience, can you quantify how large a design must be
before it begins to hit memory limits when using gEDA/PCB? How many
nets/components? This information would be interesting to the
developers. (Or if your observations are about general computer
performance as opposed to gEDA/PCB, perhaps you could make that clear,
so we don't worry about possible performance enhancements we might
make?)

* As for making schematic capture and layout separate threads of the
same process: they weren't designed together, and don't share
data structures or an API, so integration represents a lot of work.

At the level of interoperability, the schematic capture program and
the layout program work great together. But they're not the same
program, and combining them into one program is not only difficult,
but is not necessarily a good thing. This is basically a FAQ:

http://geda.seul.org/wiki/geda:faq#why_does_the_geda_suite_seem_like_a_collection_of_random_programs_and_not_a_single_integrated_application

I'll note again that a board flow commonly found in the Boston area
is ViewDraw -> Allegro. ViewDraw is totally unrelated to Allegro, and
nowadays they are products of competitors. Nonetheless, this flow
works great, since ViewDraw has the ability to netlist to Allegro
quite easily. (Let's hope Mentor and Cadence don't try to break this
link moving forward.) Gschem and PCB have the same relationship: You
can netlist quite handily from gschem to PCB. Moving forward, you
will see backward annotation as well as a feature allowing you to
select a component in PCB and see the symbol light up in gschem.
Therefore, this business about "better integration" is just nonsense.

* As for this business about "toy student projects", my experience is
that a good chunk of boards are of the 6" x 8" 4-6 layer type, both in
academia as well as in the real world. Think test boards, knock-off
boards for the manufacturing floor, quick data acquisition boards,
amplifier modules, medium-sized microcontroller boards, connector
aggregation boards, sensor boards, protocol conversion boards, boards
for test fixtures, peripheral boards, PCI boards, audio boards, ham
radio boards, hobby robot boards, motor control boards, position
sensor boards etc. etc. etc. . . .

. . . That's the target audience for gEDA/PCB. Of course, some people
have created larger boards than that using the gEDA tools, and bully
for them! But my opinion is this: If you're designing a 15"x24" 20
layer router board with controlled impedance, high-speed,
matched-length differential busses, you should probably go out and buy
one of the fine high-end products from Mentor or Cadence.

* Next, you made this comment:

: When the typical desktop CPU comes standard with 10MB or better of L2
: cache, these issues might go away. Last time I checked, this was only
: available for high end Itanium processors, well outside the reach of
: most mortals in cost (or me right now).

My answer to you is: Which do you prefer, shelling out a few
thousand $$$ for a better computer to run a powerful open-source
design suite, or shelling out tens of thousands of $$$ to run a secret
source design suite (likely requiring a high-end work station anyway)?

* Finally, I'll point out that gEDA/PCB is an open-source project, so
people interested in new features are always welcome to submit patches
for incorporation into the code base. We get a large number of people
complaining about one or another imagined misfeature in the gEDA
suite. However, the ratio of code patches to suggestions/complaints
is pitifully small. I sometimes tell the folks with suggestions: A
patch is worth a thousand suggestions.

Stuart

Ian Bell

May 22, 2006, 3:00:28 PM
Stuart Brorson wrote:
>
> At the level of interoperability, the schematic capture program and
> the layout program work great together. But they're not the same
> program, and combining them into one program is not only difficult,
> but is not necessarily a good thing. This is basically a FAQ:
>
>
http://geda.seul.org/wiki/geda:faq#why_does_the_geda_suite_seem_like_a_collection_of_random_programs_and_not_a_single_integrated_application
>

I think Kicad goes one better than gEDA in that it is a monolithic
confederacy: its major functional programs are separate but share a common
UI and data structure.

Ian

Stuart Brorson

May 22, 2006, 5:30:45 PM
Ian Bell <ruffr...@yahoo.co.uk> wrote:

Maybe. I think that the user doesn't care about datastructures or
APIs. He just wants to design a circuit, and is usually oblivious to
the way the code works.

I do agree that newbies like Kicad better since they don't
leave the comfortable graphical environment to go from schematic
capture to layout. I also think that the way you drive Kicad is a
little more similar to the normal Windoze paradigm, whereas gEDA is
more unix-y. However, gEDA is changing this ... even now there are
some patches in CVS which make gEDA more Windozey.

On the other hand, I think that Kicad is a little buggier
than gEDA -- it segfaulted a couple of times during my hour or two
playing with it. Gschem never segfaults. Also, Kicad is more limited
IMHO. That is, gEDA/PCB scales nicely to large designs with lots of
schematic pages (many nets and many components). I am not sure Kicad
scales to more than one page (although it may and I missed that
feature). I think that the gEDA SPICE netlister is much more
full-featured than Kicad's (which can't import external vendor
subcircuit model files). Also, due to its extensible architecture,
gEDA can netlist to over 20 different file formats, including 4 or 5
commercial layout packages. Can Kicad do that? (Indeed, can you
write out a netlist native to Kicad's layout editor?) Finally, I
personally like the fact that gEDA/PCB are connected via
writing/reading files. It makes it easy to break into the flow with
scripts if need be.

I am very glad that Kicad is around, and I have recommended it to
newbies who weren't up to using gEDA/PCB. We gEDA developers have
played with it a little bit, and are very impressed with the UI
experience. Personally, I tend to see it as more
suited to smaller boards/student projects, but I may be wrong and it
may be just as capable as gEDA of scaling up. It would be interesting
to do a head-to-head comparison of gEDA/PCB vs. Kicad to see which can
handle larger designs, more layers, more nets, larger boards, etc.
Hmmm, an interesting topic for a FreeDog get-together.

Stuart

fpga...@yahoo.com

May 22, 2006, 8:16:26 PM
Hi Stuart,

I'm also doing open source work in my free time, with very little
outside help, even though some three dozen people have asked to join
the project, and most have since been "fired" for failing to contribute
even at a discussion level (and a few more are likely to be fired
for the same reason soon). Potentially helpful developers are certain
to look at the long list and falsely assume there are enough people
for the project. http://sourceforge.net/projects/fpgac

The difference is that I listen very carefully to my users' comments
and suggestions, rather than argue with them that they should settle
for less, take it or leave it. I see the project as a reflection on my
skills, and on my ability to do a reasonable job at real life projects
too. FpgaC has a long way to go to be commercial quality, and my goals
are nothing short of that, despite its current (and numerous)
shortcomings.

I believe that hobbyists and small businesses (i.e. consultants) should
have access to EDA tools too ... without a $30K budget for a netlist C
compiler that has usable synthesis ability. Ditto for quality SPICE,
schematic, and PCB tools. When our imagination is limited by our tools
budget, our home projects and the contracts we can bid for are
severely limited to "toy" sized projects.

Many of the most notable open source projects don't strive to produce
a toy operating system, a toy compiler, a toy word processor, or a toy
windowing system, but full-featured, industrial-strength projects that
every contributing developer can be proud of.

Stuart Brorson wrote:
> * From your experience, can you quantify how large a design must be
> before it begins to hit memory limits when using gEDA/PCB? How many
> nets/components? This information would be interesting to the
> developers. (Or if your observations are about general computer
> performance as opposed to gEDA/PCB, perhaps you could make that clear,
> so we don't worry about possible performance enhancements we might
> make?)

You know your design better than anyone else ... how many parts and
line segments will fit in a 128K or 256K or 512K L2/L3 cache? While it
might not seem a problem in a highly interactive application (where the
human is the major delay), consider that it becomes a problem as soon
as you start doing things to actually save the human time ... like
autorouting, or assisted drawing by dragging a rat, etc ... where
walking your lists will flush the caches and severely compete with X,
which also has large memory requirements.
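A rough answer to that sizing question is simple division, under an assumed record size (the 64 bytes per track segment or pin record below is an assumption for illustration, not PCB's actual struct size):

```python
# How many design objects fit in an L2 cache of the sizes mentioned
# above?  Assumption: ~64 bytes per object (coordinates, layer, flags,
# list pointers) -- illustrative, not PCB's real memory layout.

OBJ_BYTES = 64

for cache_bytes in (128 * 1024, 256 * 1024, 512 * 1024):
    objects = cache_bytes // OBJ_BYTES
    print(f"{cache_bytes // 1024:>4} KB L2 -> ~{objects} objects before thrashing")
```

Under that assumption, a 128 KB cache holds only a couple thousand objects, so a board with "several thousand connections" already spills out of it before counting the application code, X, and the netlists.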

The GTK version of PCB has been "useless" for a year, and still is
today, because it generates 10-80 second delays with the mouse locked
up while refreshing a drawing of a modest-sized board. The older Xaw and
the current Lesstif-based versions don't have that problem .... so the
GTK version is pretty much for toy designs only. Much of this delay is
memory thrashing.

The thought behind gEDA should be to build commercial quality tools,
capable of real design layouts ... motherboards, complex PCI I/O cards,
complex embedded systems designs ... so that home hobbyists' and small
business/consultant designs are not limited by their budget for tools.

> * As for making schematic capture and layout separate threads of the
> same process: they weren't designed together, don't share
> datastructures or an API, and so therefore integration represents a
> lot of work.

I believe in doing it once, right. Energy put into doing it in a way
that cannot later be used to do it right is wasted in the long term,
and of very little real value in that respect. If you decide to limit
your portion of gEDA to toy-sized projects, someone else someday will
have the vision to replace your work completely, and do it right. You
create your own legacy with open source.

> At the level of interoperability, the schematic capture program and
> the layout program work great together. But they're not the same
> program, and combining them into one program is not only difficult,
> but is not necessarily a good thing. This is basically a FAQ:

It certainly depends on your goals ... I face people every day who argue
for substandard solutions, sometimes with very valid reasons from their
viewpoints. I also actively help others who set much higher goals, as
my time is then not wasted, and the end product will be worth having my
name on.

> Therefore, this business about "better integration" is just nonsense.

Actually, I don't think so ... but you are doing the job ... do it your
way, and people will certainly remember you for it.

> * As for this business about "toy student projects", my experience is
> that a good chunk of boards are of the 6" x 8" 4-6 layer type, both in

I'm glad most major open source developers are not that short-sighted,
or else Linux, gcc, the GNU tools, Mozilla/Firefox, Open Office, etc ...
would all be toys and not really usable as they are today.

> My answer to you is: Which to you prefer, shelling out a few
> thousand $$$ for a better computer to run a powerful open-source
> design suite, or shelling out tens of thousands of $$$ to run a secret
> source design suite (likely requiring a high-end work station anyway)?

It has never been either/or .... not for Linux, gcc, the GNU tools,
Mozilla/Firefox, Open Office, KDE/Gnome, or hundreds of other
high quality open source projects whose commercial counterparts were
also very expensive just a decade ago.

> * Finally, I'll point out that gEDA/PCB is an open-source project, so
> people interested in new features are always welcome to submit patches
> for incorporation into the code base. We get a large number of people
> complaining about one or another imagined misfeature in the gEDA
> suite. However, the ratio of code patches to suggestions/complaints
> is pitifully small. I sometimes tell the folks with suggestions: A
> patch is worth a thousand suggestions.

Not imagined ... from experience .... and from that experience I've
learned that when someone sets their sights at barely good enough,
they will always fail. When someone sets their sights very high, with
excellent standards, then even when they miss the mark, what they have
produced will be very noteworthy and worth the effort for others to
finish to the same high standards.

fpga...@yahoo.com

May 22, 2006, 9:04:19 PM

Stuart Brorson wrote:
> I sometimes tell the folks with suggestions: A patch is worth a thousand suggestions.

Or you may be doomed to repeatedly failing to get it right, because you
failed to listen to another's hard-learned experience, offered in the
form of a helpful suggestion.

Ales Hvezda

May 22, 2006, 11:30:33 PM
Hi,

fpga...@yahoo.com wrote:
> I'm also doing open source work in my free time, with very little
> outside help, even though some three dozen people have asked to join
> the project, and most have since been "fired" for failing to contribute
> even at a discussion level (and there are a few more likely to be fired
> for the same reason soon). Potentially helpful developers are certain

Wow. Let me get this straight: you "fire" volunteer
contributors and/or developers who want to help out on your OSS
project? An interesting management approach. The gEDA project
(especially myself) really values anybody who contributes patches /
code / documentation / suggestions etc... Even if I don't accept
something, I still value the contribution. I would never consider
"firing" a volunteer who spends their valuable free time helping the
project out.


[snip]


> The GTK version of PCB has been "useless" for a year, and still is
> today, because it generates 10-80 second delays with the mouse locked
> up refreshing a drawing of a modest sized board. The older Xaw and the
> current Lesstif based version don't have that problem .... so the GTK
> version pretty much is for toy designs only. Much of this delay is
> memory thrashing.


Could you post some hard data on how you came to these
conclusions? I'm sure the PCB developers (myself as well) would love
to see your data or experimental results. Thanks.


[snip]


> I believe in do it once right. The more energy that you put into doing
> it in a way that can not be used to do it right, is actually wasted in
> the long term and of very little real value in that respect. If you
> decide to limit your portion of gEDA to toy sized projects, someone
> else someday will have the vision to replace your work completely, and
> do it right. You create your own legacy with open source.


I have a very different reason for working on free software.
I don't write free software to "create a legacy" for myself. I write
free software to solve real world problems and maybe somebody else will
find it useful. I make a significant effort to keep my ego out of the
process as much as possible. I find this approach works best and over
the years there have been many valuable contributions to the gEDA
project.

-Ales


PS. If somebody has the "vision" to replace gEDA completely with
something way way way better, all the power to them! Maybe
they can leverage something from the existing code base and/or
learn from my many missteps. In the meantime, work on gEDA
continues...

--
Ales Hvezda
ahvezda -AT- seul.org
http://geda.seul.org

DJ Delorie

May 23, 2006, 12:06:02 AM

"Ales Hvezda" <ahv...@yahoo.com> writes:
> Could you post some hard data on how you came to these
> conclusions? I'm sure the PCB developers (myself as well) would love
> to see your data or experimental results. Thanks.

I have a not-for-distribution sample board which demonstrates the 10
second pause he's referring to. So far, it looks like a "catch up
with mouse events" scenario. I've also experienced the slow pre-hid
Gtk that some people complain about.

fpga...@yahoo.com

May 23, 2006, 12:07:52 AM

Ales Hvezda wrote:
> Wow. Let me get this straight, you "fire" volunteer
> contributers and/or developers who want to help out on your OSS
> project? An interesting management approach. The gEDA project
> (especially myself) really values anybody who contributes patches /
> code / documentation / suggestions etc... Even if I don't accept
> something, I still value the contribution. I would never consider
> "firing" a volunteer who spends their valuable free time helping the
> project out.

Anyone "who spends their valuable free time helping the project out"
stays. Anyone who doesn't have the time to write even half a page
of code in several months, or to contribute to discussion, clearly isn't
developing anything, are they? It's a team effort ... those that fail
the team by failing to produce ANYTHING AT ALL aren't team members,
are they?

I don't see your project with a long list of "developers" who have
never contributed to the project.

If someone asks to be listed and doesn't contribute at all, do you
allow them to keep their name on your developers list so they can puff
up their resumes?

Frankly, some find themselves in over their heads and don't feel they
can contribute at a reasonable level, and I generally ask those people
to stay and spend the time training them. At least they have the
integrity to communicate openly, unlike those who don't answer
their email after the first week, or never deliver after making a
commitment to do their part.

> Could you post some hard data on how you came to these
> conclusions? I'm sure the PCB developers (myself as well) would love
> to see your data or experimental results. Thanks.

First are you even aware of the problem? Bug report:
1217807 pcb-20050609 is WAY WAY slower

Second, are you aware of the dynamics of the failure mode? ....

> I have a very different reason for working on free software.
> I don't write free software to "create a legacy" for myself. I write
> free software to solve real world problems and maybe somebody else will
> find it useful. I make a significant effort to keep my ego out of the
> process as much as possible. I find this approach works best and over
> the years there have been many valuable contributions to the gEDA
> project.

Obviously your ego is highly engaged to respond this way, as was
Stuart's, jumping in to attack suggestions on what PCB should be, and
never even mentioning gschem. Get a grip, fella -- why in the hell are
you attacking me for making some constructive criticism, and responding
equally lively to Stuart's little pissy bit?

Ditto for you .... my comments are about what PCB should be; I couldn't
care less about the rest of the gEDA project for the most part.

> PS. If somebody has the "vision" to replace gEDA completely with
> something way way way better, all the power to them! Maybe
> they can leverage something from the existing code base and/or
> learn from my many missteps. In the meantime, work on gEDA
> continues...

I thought this was a PCB discussion ....

fpga...@yahoo.com

May 23, 2006, 12:14:05 AM

That, DJ, is why I sent the list of constructive criticisms to you
personally: to keep the advocates from getting all bent out of shape
that someone would dare consider another view of the world.

fpga...@yahoo.com

May 23, 2006, 3:30:52 AM
Now, back to the original topic at hand, before Stuart and Ales so
rudely created a gschem and gEDA turf war over schematics not being
part of PCB's charter -- I was unaware that gEDA had the right to
dictate design to the PCB project, which existed long before gEDA. I'll
let DJ, Harry, Stuart and Ales work their turf war out ... leave me out
of it please.

I considered strongly adding schematics into PCB some four years back,
while Harry was still maintaining it himself out of JHU. When Harry
dropped off the face of the earth for a while, I even considered
starting a PCB project at sf.net, till I saw one day that Harry had
created one. Harry believed strongly that schematics do not belong in
PCB, and as chief maintainer of the sf.net project, that was his choice.
Before starting FpgaC on sf.net, I was strongly tempted to pick up and
continue to support an Xaw version, and add schematics in, as an sf.net
project called PCB2 ... fork the project, since Harry and I have very
different goals and expectations about UIs and the types of designs
PCB should support/produce. I asked very clearly on the PCB user forum
if Xaw was dead, trying to get a clear idea of whether it would remain a
crippled Gtk version ... and got no answer. I was actually surprised to
find DJ had done a Lesstif variation, and had deviated strongly
(forked) from the old/Gtk UI.

In the end I decided I would be more useful digging TMCC out of the
grave and bringing it forward a decade to be useful with today's FPGA
products.

TMCC/FpgaC suffers badly from the same working set problems I posed for
PCB. Very small changes in a project's code can push FpgaC compile
times from a few minutes to hours ... and in one case from 45 minutes
to over a day and a half, simply by exceeding the working set size of
the L2 cache. Interestingly enough, the same C code does the same thing
to GCC at a slightly different boundary point.

Student, and other toy, projects frequently contain simple algorithms
that are OK inside a typical processor's L2 cache size these days ...
but that fail horribly, performance wise, when the data set grows just
slightly. In this case, linearly searching a linked list works fine up
to about 90-95% of the L2 cache size. When you exceed that threshold,
performance drops and run times increase roughly 10X or more because
of the nature of LRU or pseudo-LRU cache replacement policies.

Consider for example, a small cache of 4 "bins" of marbles taken from a
bowl of 300 marbles. If we first reference a certain red marble, it's
taken from the bowl and placed in a cache bin after searching the 300
marbles for it. We keep using, and replacing the red marble avoiding
the search in the bowl. Later we also use a green, blue, and yellow
marble, which take the three remaining bins in the cache. Because of
the nature of the task, we always use red, green, blue, and yellow in
that order, always taking from the cache, and replacing in the cache.

When our working set expands to five marbles, we have a cache failure,
which goes like this. We access the red, green, blue, and yellow
marbles in order from the cache, then we need a white marble. The red
marble is least recently used, so it's removed from the cache and
replaced with the white marble. We then repeat our cycle, next needing
the red marble, which is no longer in the cache, so we must fetch it
from the bowl and, due to the LRU algorithm, replace the green marble
with the red marble. However, next we need the green marble, which
forces the blue out of the cache. Next we need the blue marble, forcing
the yellow out of the cache. Next we need the yellow, forcing the white
out of the cache ... and so on, with every single access faulting,
requiring a lengthy access and search of the bowl.
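The marble analogy can be simulated directly. The sketch below is illustrative only (it is not PCB or FpgaC code): a 4-slot LRU cache driven by a cyclic access pattern, showing the cliff when the working set grows from 4 items to 5.

```python
from collections import OrderedDict

def lru_misses(capacity, working_set, accesses):
    """Count misses for a cyclic access pattern over an LRU cache."""
    cache = OrderedDict()              # keys kept in LRU order, oldest first
    misses = 0
    for i in range(accesses):
        key = i % working_set          # cycle: red, green, blue, yellow, ...
        if key in cache:
            cache.move_to_end(key)     # hit: mark most recently used
        else:
            misses += 1                # miss: fetch "from the bowl"
            cache[key] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used

    return misses

# 4 bins, 4 marbles: only the 4 cold-start misses over 400 accesses.
print(lru_misses(4, 4, 400))   # -> 4
# 4 bins, 5 marbles: after warm-up, every single access misses.
print(lru_misses(4, 5, 400))   # -> 400
```

Note the cliff: one extra item in the working set takes the hit rate from ~100% straight to 0%, which is exactly the sharp step in run time described above.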

LRU algorithms fail horribly with sequential searches of the cached
working set, resulting in a very sharp reduction in performance as the
working set is exceeded. In FpgaC's case, the primary data structures
are linked lists which are frequently searched completely to verify the
lack of duplicate entries when a new item is created. When the working
set under these linked lists exceeds the processor's L2 cache size, run
times jump by more than a factor of 10 on many machines these days ...
the ratio of L2 cache performance to memory performance. Thus,
depending on the host processor's L2/L3 cache size, there are critical
points for FpgaC where the run time to compile an incrementally small
increase in program size jumps dramatically. The fix for this is
relatively simple, and will occur soon: replace the linear searches
with tree or hash searches, so that creating a new item no longer
references the entire working set and triggers the LRU replacement
failure mode.
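A minimal sketch of that fix, with hypothetical helper names invented for the example (this is not FpgaC's actual code): the linear version walks the entire list (and thus the entire working set) on every insert, while the hashed version probes a single bucket.

```python
def insert_unique_linear(items, symbol):
    # O(n) scan of the whole list on every insert: this is the pattern
    # that touches the entire working set and triggers the LRU failure mode.
    for existing in items:
        if existing == symbol:
            return False            # duplicate; nothing added
    items.append(symbol)
    return True

def insert_unique_hashed(items, seen, symbol):
    # O(1) membership probe: touches one hash bucket, so the memory
    # referenced per insert stays tiny no matter how long the list grows.
    if symbol in seen:
        return False
    seen.add(symbol)
    items.append(symbol)
    return True

names = []
for s in ["a", "b", "a", "c", "b"]:
    insert_unique_linear(names, s)
print(names)   # -> ['a', 'b', 'c']
```

Both keep the list duplicate-free; only the amount of memory touched per insert differs, which is the whole point once the list outgrows the cache.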

Similar problems exist at several levels in the responsiveness of PCB.
Any event which forces a search of the design space will require the
working set to hold all the objects to be searched. When that working
set grows past various cache sizes, noticeable increases in latency
result, to the point that they are visible in the UI ... that point
will vary depending on the particular machine being used (L1/L2/L3
cache sizes, and the relative latency of references that fault).
Developers who only use and test on fast processors, with large caches
and fast native memory, will not notice the extremely jerky performance
that someone using a P2 333 MHz Celeron (128K cache) with a 66 MHz
processor bus and fast page mode memory will encounter. Slightly larger
designs running on 4 GHz processors with 512K caches will fail equally
noticeably with a design some 4-10 times larger.

Certain operations will fail harder: those which invoke a series of X
output calls, as they also incur the memory overhead of part of the
kernel, some shared libraries, the X server, and the display driver in
the "working set" for those operations. While a 512K cache is four
times larger, the available working set is the cache size minus the X
working set, meaning that for small cache sizes there might not be much
working set left at all, while doubling the cache size may actually
increase the usable working set by 10X or more.

Just taking a guess, PCB + kernel + Xaw + X server probably has a
working set around or slightly larger than 128K for very basic PCB
tasks. Thus we will see cache flushing and reloading between X calls,
and locally acceptable cache performance at both ends. As the L2/L3
cache grows to 512K this is probably less of a problem.

What does become a problem is when the PCB<==>X working set gets
continually flushed by every call, such as making a long series of
little calls to the X server, faulting all the way to the X server and
faulting all the way back ... calling performance drops like a rock ...
by a factor of 10 or more. This happens when the task at either, or
both, of the PCB or X server ends requires a slightly larger working
set, pushing the total working set for LRU into the worst-case failure
mode.

I suspect that the Gtk failure modes do this, by including Gtk overhead
in the working set, such that every PCB-to-Gtk-to-X-server call faults
round trip, and runs at native memory performance. The reason I believe
this is that in my testing a 550 MHz PIII machine with SDRAM is only
about twice as slow as a 2 GHz P4 machine with DDR SDRAM in this
failure mode ... rather than the 4-6X normal computation difference
when running at CPU speeds from L1 cache, or even L2 cache.

With synchronous calls to Gtk and the X server, it's difficult for PCB
to keep its event processing in real time.

I have a several-day class I used to teach regularly that discusses in
detail designing for the hysteresis problems that occur with step
discontinuities in the processor load vs. throughput function; it is
quite useful for recognizing and designing architectural solutions to
problems of this class.

So ... application architecture with respect to working set sizes is a
critical performance issue. Algorithm choices which conserve working
set, and avoid sequential LRU faulting, are a critical design issue.
Carefully managing data set representation for compactness, even at
the cost of a number of CPU cycles for packing/unpacking, can greatly
push back the working set failure threshold with careful design.
Consolidating processes to minimize the frequency of long-working-set
calls to external processes (including them as threads if necessary) is
critical, so they can be aligned with places in the UI interaction
where latency hiding is transparent rather than highly visible.

Stuart Brorson

unread,
May 23, 2006, 6:47:30 AM5/23/06
to
Dude, way to flame!

Sorry for the top posting, but this thread suddenly got kind of
long . . . .

I'll let others decide who's rude and sensitive; I didn't mean to come
across that way. I do try to make a point of defending gEDA since
there is a lot of FUD out there, much of it due to user cluelessness,
i.e. newbie college freshmen, or worse: grad students :-).
I think it important to place verifiable facts (like examples of
boards done with gEDA/PCB, and software design considerations)
against generalized complaints about usability (like "it's too slow"
or "too hard to use"). If it sounds like flaming, or an ego thing,
that's not the intent.

In contrast to some of the flames we get, you do appear to be quite
clueful, and have done open-source stuff yourself, so my hat is off to
you.

As for "a patch is worth a thousand posts", well, I stand
by it. I see you only *considered* patching PCB. Hmmmm . . .
But OTOH you started your own project, so you're doin' alright.

As for all your points about L2 cache, scalable algorithms, and
synchronous calls from GTK to X, that's very nice. But have you
verified any of them in the *specific* instance of PCB's code, or are
they just some ideas? Do you have any specific files/line numbers
where you see sub-optimal loops? You see, that's my point:
Donating general ideas is very easy, but doing implementation is
difficult. However, implementation is what counts. As an
open-source guy, I'm sure you know this.

As for the unfairly maligned GTK port of PCB: It was done by popular
request by a developer who kindly donated his valuable time to the
project, just as Ales, DJ, and you do with your respective projects.
This port brought PCB into a widget set that didn't look like 1985,
and also provided some usability enhancements. Any slowness is due to
the GTK widget set.

That GTK version of PCB worked fine (i.e. reasonable response time)
for me on normal desktop workstations (i.e. I didn't need a
supercomputer), but my boards tend to be of the middle-level size and
complexity. If you have boards which radically slow down PCB, that's
an interesting factoid for the general EDA community: You're doing
some pretty large designs using gEDA. Care to share component or net
counts on those boards? It would interest some of the nay-sayers
here.

(A few people did complain about the GTK port's speed when it came
out. Perhaps the speed did depend upon each computer's detailed
architecture, cache usage, and stuff like that. In
any event, DJ and team have re-architected PCB to support multiple
GUIs, including GTK, Motif, and Xaw. It should be getting even
speedier now.)

Anyway, this discussion is devolving, and I have made my simple points
already:

* GEDA is very usable -- and is often used in the real world -- for
board designs up to mid-level complexity.

* "A patch is worth a thousand posts." Put another way: ideas are
cheap, implementation is what counts.

Therefore, with that I'll bid this thread farewell.

Stuart


fpga...@yahoo.com wrote:
: Now, back to the original topic at hand, before Stuart and Ales so

Ales Hvezda

unread,
May 23, 2006, 7:31:03 AM5/23/06
to
Hi again,

[snip]


> Frankly, some find themselves in over their heads and don't feel they
> can contribute at a reasonable level, and I generally ask them to stay
> and spend the time training them. At least those people have the
> integrity to openly communicate, rather than those that don't answer
> their email after the first week, or never deliver after making a
> commitment to do their part.

I guess I don't worry too much about people who commit to do
something and then never deliver, or never contribute, or get busy
with real life. Whatever; I'm just happy when people contribute.

[snip]


> > I have a very different reason for working on free software.
> > I don't write free software to "create a legacy" for myself. I write
> > free software to solve real world problems and maybe somebody else will
> > find it useful. I make a significant effort to keep my ego out of the
> > process as much as possible. I find this approach works best and over
> > the years there have been many valuable contributions to the gEDA
> > project.
>
> Obviously your ego is highly engaged to respond this way, as was
> Stuart's, to jump in attacking suggestions on what PCB should be, and
> never even mentioning gschem. Get a grip, fella; why in the hell are
> you attacking me for making some constructive criticism, and responding
> equally lively to Stuart's little pissy bit.

Attacking you? Huh? I was very careful in my word choice. I was
only referring to me, not you. My only point is that everybody has
their own different reasons for doing OSS/free software.

Anyways, interesting thread, but this is where I stop off as well. :)

Good luck with your project!

-Ales

fpga...@yahoo.com

unread,
May 23, 2006, 8:28:05 AM5/23/06
to

Stuart Brorson wrote:
> I see you only *considered* patching PCB. Hmmmm . . .

Ought to do your homework first ... I've sent Harry patches in the
past.

> Donating general ideas is very easy, but doing implementation is
> difficult. However, implementation is what counts. As an
> open-source guy, I'm sure you know this.

Actually, if the only acceptable form of bug report is a fix, it
will be a very long time before your project is complete and stable.
I've provided demonstrable boards to Harry, DJ, etc. that demonstrate
the unreasonable slowness.

> As for the unfairly maligned GTK port of PCB: It was done by popular

Actually, not unfairly maligned. A clean Gtk port would not have broken
the existing Xaw usage until everybody agreed it was a stable
replacement. One of the boards I had here last year took well over a
minute to redraw under Gtk after a simple pan, versus under a second
with Xaw ... that's not just SLOW, that's unusable. Doesn't matter how
pretty its GUI looks.

> * "A patch is worth a thousand posts." Put another way: ideas are
> cheap, implementation is what counts.

When you stop listening to other people's experience, the only other
choice is to make all the same mistakes yourself along the way, and
hope you actually learn the lessons too, and not do as many less
clueful folks do: repeat the same old mistakes forever, because that's
the way it's always been done.

There are far more clueful people in the world willing to share
experience, if not treated like they're clueless right off the bat.
There was no reason to jump into this discussion complaining about not
supporting gschem and gEDA ... I don't like gschem, and don't use it.
One more bastard drawing UI to have to learn. The whole gEDA kludge
between gschem and PCB is painful at best. The lack of a consistent UI
was one prime complaint to DJ.

What I was talking about are technical reasons for doing it right as
part of PCB, which has nothing to do with gschem, or the turf you were
defending by mistakenly attacking me.

I don't think it's nearly as difficult as you were complaining about,
and I've already looked. Maybe it's just because I'm not easily scared
by complexity, and take a general iterative "Keep It Simple Stupid"
(KISS) approach to tackling difficult projects until I become much more
experienced with the code. As I noted to Harry several years back ...
all the pieces are there in PCB already ... just start out by treating
schematic symbols as footprints, keep dual footprint libraries
initially (schematic and pcb), and keep two wire tracks for the design
at first. Then clean up the internal interfaces slowly, to include a
reasonable formal architecture. Things like crossed wires not forming a
connection unless explicitly joined. Like linked references between
schematic symbols, pin function lists, and actual footprints based on
industry standards and vendor data. Things like automatic cross
notation between instances of the netlist (traces/rats). Being able to
pull SPICE data for not only the design, but the implementation traces
as well, tied to vendor part data. Maybe not all the first year, or
even the second or third.

You end up with ONE UI, one project file, and hopefully one consistent
parts library.

Years ago, you did a schematic, then laid out the board. These days
with FPGAs, I frequently do the PCB and then the schematic, as nearly
all the pins are assigned based on ease of layout, not predefined as
they are with commodity parts. Nowadays it is very useful to have both
the schematic and the PCB up at the same time, and draw both at the
same time, one net at a time.

Used Sony 21" monitors are $50/ea on eBay, and dual/triple/quad head is
supported in both Windows and Linux. A dual processor 1 GHz system with
4GB of RAM is under $500, which makes one heck of a CAD system under
Linux. My desk has three SGI GDM-5011Ps on it, which take VGA in. Most
people's Best Buy or Circuit City systems cost more than I paid for the
parts on eBay. With large glass tubes not being "cool" these days,
high-res Sony monitors are a MUST BUY for any hardware hobbyist doing
CAD while the supply lasts. Get several and use them till they die.
I'm 54, and find using large fonts makes web surfing easier on my eyes
when I'm not busy doing another design.

I'm an avid hardware hobbyist, and most of the really dense and fun
boards I've done are for personal research. I mostly get paid for doing
contract software work and networking stuff, with hardware projects a
secondary part of my living. I like hardware, and don't turn down the
contracts when I can get them. When two pieces of a 16"x22" six layer
SMOBC panel are $185 from certain suppliers, and will hold a half
dozen projects ... doing quality PCBs is both cheap and fun. I
frequently run a homebrew club from my home, and share panel runs,
making two pieces of most projects $20-60, and about double that if we
need stencils for both sides. I will be manufacturing boards as a
business later this year, with dual SMT pick and place lines and N2
reflow ovens. Mostly to produce my own research boards, plus low cost
hobby and student FPGA project boards from recycled parts that I have
extra. The lines were a couple grand off eBay, and were picked up
initially to build my home FPGA supercomputer boards -- which is
another fun project I've been working on for a few years. Several
thousand FPGAs, MIPS/PPC CPUs, memory, water cooling and a lot of
power :)

The boards sent to DJ and Harry to demonstrate the Gtk slowness are all
proof of concept designs from my own research projects, some of which
I've also sold a few of. I think DJ finds them "interesting" too.
So when Stuart was implying that nobody needs more than a toy
design of the sort that can be done with the crippled student version
of various demo products, I pretty much felt he doesn't have a clue
what real hardware geeks want to do in their spare time with $50 of
recycled eBay parts :)

I'll be going back to grad school soon, and need my "home computer" for
research :)

DJ Delorie

unread,
May 23, 2006, 9:22:06 AM5/23/06
to

[subject changed to reflect reality, and most people aren't interested
in the internals of the pcb program. Please remove the [was...] when
replying - DJ]

I think the key to the gtk pcb's sudden slowdown can be found in
queuing theory. As you move the mouse, the X server generates events.
They come at a certain pace, and you deal with them in a certain time.
The size of the queue of events is determined by the input rate and
completion rate. One interesting rule: when the input rate exceeds
the completion rate, the queue eventually becomes infinite. This
"trip point" happens when the redraw exceeds a certain complexity,
such as the sample board, and depends on your hardware speed too.

The lesstif HID was designed for my 400MHz laptop, so I went to great
lengths to avoid this problem (having been stung by it before). It
does two things to avoid the queue.

First, I combine motion events. When I get a motion event I remove
all motion events from the queue and query the *current* mouse
pointer. Thus, if the system is busy, I end up skipping events to
keep up.

Second, I redraw the screen only when the program is otherwise idle.
The event handlers only update the data structures, they don't usually
draw on the screen, just set a flag to do so later. When the event
queue is empty, I test the flag and, if set, *then* I redraw the
screen. The net result is, if the redraw takes 0.1 second, the screen
will be done redrawing 0.1 second after you stop moving the mouse.
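The two techniques can be sketched roughly like this (illustrative Python, not the actual lesstif HID code; the `Viewport` class and the event tuples are invented for the example):

```python
from collections import deque

class Viewport:
    """Sketch of the two HID techniques: coalesced motion events
    and deferred, idle-time redraw."""

    def __init__(self):
        self.pointer = (0, 0)
        self.needs_redraw = False   # the "redraw later" flag
        self.redraws = 0

    def handle_events(self, queue):
        while queue:
            ev = queue.popleft()
            if ev[0] == "motion":
                # Coalesce: drop every queued motion event and keep
                # only the most recent pointer position.
                while queue and queue[0][0] == "motion":
                    ev = queue.popleft()
                self.pointer = ev[1]
                self.needs_redraw = True   # defer the actual drawing
            elif ev[0] == "click":
                self.needs_redraw = True
        # Queue empty -> program is idle -> redraw at most once.
        if self.needs_redraw:
            self.redraws += 1
            self.needs_redraw = False

vp = Viewport()
q = deque([("motion", (i, i)) for i in range(100)])
vp.handle_events(q)
print(vp.pointer, vp.redraws)   # -> (99, 99) 1
```

One hundred motion events collapse into a single position update and a single redraw, so the expensive work happens once, after the user stops moving.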

Also, note that PCB's core uses an rtree to store the data it needs
for a redraw. If you're looking at the whole board, you have no
choice but to go through the whole data list. However, if you zoom
in, the working set shrinks to only those objects that are visible.
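A linear stand-in for that idea (illustrative only; PCB's actual rtree prunes whole subtrees rather than testing each object, but the effect on the working set is the same):

```python
def cull(objects, viewport):
    """Keep only objects whose bounding box intersects the view.
    Zooming in shrinks the result -- and thus the per-redraw
    working set -- even though the board data is unchanged."""
    vx0, vy0, vx1, vy1 = viewport
    return [(x0, y0, x1, y1) for (x0, y0, x1, y1) in objects
            if x1 >= vx0 and x0 <= vx1 and y1 >= vy0 and y0 <= vy1]

# 100 unit boxes along the diagonal of a hypothetical board.
board = [(i, i, i + 1, i + 1) for i in range(100)]
print(len(cull(board, (0, 0, 100, 100))))   # whole board -> 100
print(len(cull(board, (10, 10, 20, 20))))   # zoomed in  -> 12
```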

Ian Bell

unread,
May 23, 2006, 10:16:50 AM5/23/06
to
Stuart Brorson wrote:
>
> On the other hand, I think that Kicad is a little buggier
> than gEDA -- it segfaulted a couple of time during my hour or two
> playing with it. Gschem never segfaults.

Experiences clearly differ. Kicad has never segfaulted for me but gEDA has.

> Also, Kicad is more limited
> IMHO. That is, gEDA/PCB scales nicely to large designs with lots of
> schematic pages (many nets and many components). I am not sure Kicad
> scales to more than one page (although it may and I missed that
> feature).

You did. It does hierarchical schematics.

> I think that gEDA SPICE netlister is much more
> full-featured than Kicads (which can't import external vendor
> subcircuit model files).

It is, but it is also more mature.

> Also, due to its extensible architecture,
> gEDA can netlist to over 20 different file formats, including 4 or 5
> commercial layout packages. Can Kicad do that?

Not 20, 5 at present.

> (Indeed, can you
> write out a netlist native to Kicad's layout editor?)

Yes.

> Finally, I
> personally like the fact that gEDA/PCB are connected via
> writing/reading files. It makes it easy to break into the flow with
> scripts if need be.
>

Kicad is too.

> I am very glad that Kicad is around, and I have recommended it to
> newbies who weren't up to using gEDA/PCB. Us gEDA developers have
> played with it a little bit, and are very impressed with the UI
> experience.

Interesting. I always found the gEDA UI very awkward and counter intuitive.
Kicad does pretty much what I would expect.

> Personally, I tend to see it as more
> suited to smaller boards/student projects, but I may be wrong and it
> may be just as capable as gEDA of scaling up. It would be interesting
> to do a head-to-head comparison of gEDA/PCB vs. Kicad to see which can
> handle larger designs, more layers, more nets, larger boards, etc.
> Hmmm, an interesting topic for a FreeDog get-together.

Interesting yes, but given Kicad's relative youth it's hardly a fair
contest. Its current main limitations regarding scalability are the
number of layers and the lack of a decent autorouter.

Ian
>
> Stuart

fpga...@yahoo.com

unread,
May 23, 2006, 7:39:00 PM5/23/06
to

DJ Delorie wrote:
> I think the key to the gtk pcb's sudden slowdown can be found in
> queuing theory. As you move the mouse, the X server generates events.
> They come at a certain pace, and you deal with then in a certain time.
> The size of the queue of events is determined by the input rate and
> completion rate. One interesting rule - when the input rate exceeds
> the completion rate, the queue eventually becomes infinite. This
> "trip point" happens when the redraw exceeds a certain complexity,
> such as the sample board, and depends on your hardware speed too.

This is right on, and the queuing theory issues are a fundamental part
of understanding the hysteresis problems presented by exceeding the
working set. If it takes 0.1 seconds (as you state below) to redraw the
screen while running cleanly out of L1/L2/L3 cache, then you can accept
mouse events at 100 per second and keep up with the user's motion
without creating a backlog that grows the queue length. The problem
starts when the working set is exceeded, processor bandwidth drops due
to cache faulting, and all of a sudden it starts taking 10 times longer
to redraw the screen.

> The lesstif HID was designed for my 400MHz laptop, so I went to great
> lengths to avoid this problem (having been stung by it before). It
> does two things to avoid the queue.

I actually used to use PCB on a 233 MHz Compaq a few years back when I
was traveling, and it was pretty usable under RH9, a 2.2 kernel, and
96 MB of memory. Light paging traffic, mostly caused by crond, which I
would normally turn off to get rid of the jerkiness.

> First, I combine motion events. When I get a motion event I remove
> all motion events from the queue and query the *current* mouse
> pointer. Thus, if the system is busy, I end up skipping events to
> keep up.

Bravo, that is the first step in countering working set problems ...
reduce the total work load linearly, and work harder in each cache
context before faulting to another. By combining motion events you slow
down the rate of context switching to the X server, and do more work
per context switch.

> Second, I redraw the screen only when the program is otherwise idle.
> The event handlers only update the data structures, they don't usually
> draw on the screen, just set a flag to do so later. When the event
> queue is empty, I test the flag and, if set, *then* I redraw the
> screen. The net result is, if the redraw takes 0.1 second, the screen
> will be done redrawing 0.1 second after you stop moving the mouse.

AKA latency hiding, by deferring work to a less critical time.

> Also, note that PCB's core uses an rtree to store the data it needs
> for a redraw. If you're looking at the whole board, you have no
> choice but to go through the whole data list. However, if you zoom
> in, the working set shrinks to only those objects that are visible.

That was visible in the first Gtk release last year, that zooming in
would reduce the latency lag, and at some point it would suddenly
become realtime again.

Linked lists and trees frequently have very poor memory usage
efficiency, with small data structures and lots of pointer overhead,
combined with a kitchen sink problem (everything that is related to an
object is tossed into the same structure). FpgaC suffers from this
pretty badly.

Let me explain ... the problem is that to get to one or two words of
data, we frequently reference a structure that has maybe a dozen
related variables that are not used for every operation - plus a couple
pointers for linking the objects, all without cache line alignment.
When working sets start thrashing caches, there are smarter ways to
conserve working set by getting better memory utilization:

1) Separate out variables which are heavily searched/used from those
that are not critical, so large-working-set, latency-critical
operations fetch from memory only what is needed. Using this strategy,
the attributes necessary for drawing are in one object, and other
non-critical attributes are in a secondary object. It might even be
useful to compact some of these attributes in the latency-critical
object, and keep a non-compact native form in the non-critical
attribute object.

2) Use segmented tables (arrays, vectors) instead of single-object
linked lists where possible, to avoid the pointer overhead. Using this
strategy there may still be linked lists and trees, but each leaf node
includes a dozen or more objects in a table. Thus the ratio of usable
data to pointer overhead greatly improves.

3) Use some care in designing and allocating your objects so that
they do not span multiple cache lines. Since a full cache line is
read/written per memory operation, when an object uses the end of one
cache line and the beginning of another, two cache lines are partially
used, which cuts memory bandwidth in half.

Using these strategies can improve working set performance by a factor
of 3 to 10, and application performance, once the working set exceeds
cache sizes, by 3 to 20 times.
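Strategy 2 can be sketched as an "unrolled" list, where each leaf holds a small array of items plus a single next pointer. This is an illustrative Python sketch; the class names and the chunk size are invented for the example, and the real cache benefits of course only show up in a language like C:

```python
CHUNK = 16   # objects per leaf; amortizes pointer overhead across 16 items

class Chunk:
    """One leaf of a segmented table: a small array of items plus one
    'next' pointer, instead of a pointer pair per individual item."""
    __slots__ = ("items", "next")   # avoid per-instance dict overhead
    def __init__(self):
        self.items = []
        self.next = None

class SegmentedList:
    def __init__(self):
        self.head = Chunk()
        self.tail = self.head

    def append(self, item):
        if len(self.tail.items) == CHUNK:     # current leaf full:
            self.tail.next = Chunk()          # link in a fresh one
            self.tail = self.tail.next
        self.tail.items.append(item)

    def __iter__(self):
        chunk = self.head
        while chunk:
            yield from chunk.items   # sequential scan within each chunk
            chunk = chunk.next

lst = SegmentedList()
for i in range(40):
    lst.append(i)
print(sum(lst))   # -> 780
```

Traversal still follows pointers, but only one per 16 items, so far more of each fetched cache line is usable data.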

One last tidbit ... dynamically linked shared libraries have
significant working set bloat and poor cache line balancing ... it's
sometimes useful to statically link to get better cache performance ...
but that is another long discussion about why.

Toy student applications don't need to worry about these problems most
of the time. Larger production applications, where interactive
performance and batch run times are important, frequently cannot avoid
these optimizations.

My two cents' worth, from 30 years of performance engineering
experience fixing bloated applications.

DJ Delorie

unread,
May 23, 2006, 8:04:34 PM5/23/06
to

fpga...@yahoo.com writes:
> set. If it takes 0.1 seconds (as you state below) to redraw the
> screen while running cleanly out of l1/L2/L3 cache, then you can
> accept mouse events at 100 per second

10 per second.

> and keep up with the users motion without creating a backlog that
> grows queue length. The problem starts when the working set is
> exceeded,

Even simpler. If you're getting 10 mouse events per second, and it
takes 0.099 seconds to process them, you're OK. If it takes 0.101
seconds to process them, you're toast. The trip point isn't filling
cache, it's just that the redraw finally takes longer than the
available time, even if it's still running cleanly out of cache.
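The trip point is easy to see numerically. A toy discrete-time sketch (not a model of PCB itself, and the function name is invented for the example):

```python
def queue_length(arrival_rate, service_time, seconds):
    """Events arrive at arrival_rate per second; each takes
    service_time seconds to process. Return the backlog that has
    accumulated after `seconds` seconds."""
    backlog = 0.0
    for _ in range(seconds):
        backlog += arrival_rate          # events arriving this second
        capacity = 1.0 / service_time    # events we can process per second
        backlog = max(0.0, backlog - capacity)
    return backlog

# 10 events/s, 0.099 s each: capacity ~10.1/s, backlog stays at zero.
print(queue_length(10, 0.099, 60))
# 10 events/s, 0.101 s each: capacity ~9.9/s, backlog grows forever.
print(queue_length(10, 0.101, 60))
```

A 2% change in service time flips the system from "always caught up" to "queue grows without bound", which is exactly the sudden-slowdown behavior described in the thread.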

> By combining motion events you slow down the rate of context
> switching to the X server, and do more work per context switch.

More importantly, I redraw the board less often. Redraws are
expensive no matter how much cache you have.

> 2) use segmented tables (arrays, vectors) instead of single object

PCB does this.

fpga...@yahoo.com

unread,
May 24, 2006, 8:47:06 AM5/24/06
to

DJ Delorie wrote:
> I have a not-for-distribution sample board which demonstrates the 10
> second pause he's referring to. So far, it looks like a "catch up
> with mouse events" scenario. I've also experienced the slow pre-hid
> Gtk that some people complain about.

Ok DJ,

I spent all night playing with the Gtk current release version,
including stripping the board I sent you down to the PCI frame and a
few hundred connections around the edge connector, caps, and the like;
all other chips and connections were removed from the board. The
resulting complexity is less than a typical microprocessor student
board.

It still totally lags, and is in short a piece of crap running on a
uniprocessor 2 GHz P4 with 4GB.
Of course, the Lesstif version runs like a bat out of hell without any
problems, on either this cut down example or the full example I sent
you.

So, in short ... it's NOT a "catch up with mouse events" scenario at
all, as moving the PCI frame to the right as before has nearly the same
8-10 second redraw of all the rubber-banded lines, and it takes equally
long to redraw it back to the original with the "u" key for undo ...
that is NOT a catch-up-with-the-mouse problem.

Trying to drag a bounding box suffers badly, with 1/2 to 2 second
delays on mouse movements; cursor left/right movement and pan with
arrow keys lag badly; in short the whole thing just lags like hell
with a minor toy-level design with NO components other than a couple
dozen caps and a few hundred wires.

A bunch of other things are totally broken as well. Try tab to flip to
the back side, and drag out a bounding box to select a region. The box
frame isn't clipped, scaled or mirrored to the area the mouse drags out
... and neither is the region that actually selects.

If you select a large number of wires, such as the right side of the
ProofOfConcept board I sent -- the entire right half, 3 columns -- and
pull down "delete selected", it chugs away for a long time ... hitting
undo chugs away again for a long time.

None of this happens with the old Xaw version, or the new Lesstif
version ....

So in short ... the Gtk version just plain sucks rocks after a year of
development as the prime recommended default release candidate.

Your Lesstif version has all the performance of the original Xaw
version ... so my hat's off to DJ.

fpga...@yahoo.com

unread,
May 24, 2006, 9:19:58 AM5/24/06
to

DJ Delorie wrote:

> More importantly, I redraw the board less often. Redraws are
> expensive no matter how much cache you have.

Does the Gtk version have an extra flush-the-display-list-to-the-
viewport call someplace in the main processing loop? Its display
behavior is far too aggressive about wanting to update the display.

DJ Delorie

unread,
May 24, 2006, 9:49:10 AM5/24/06
to

fpga...@yahoo.com writes:
> So, in short ... it's NOT a "catch up with mouse events" scenario at
> all, as moving the PCI frame to the right as before has nearly the
> same 8-10 second redraw all the rubber banded lines problems, and it
> takes equally long to redraw it back original with "u" key for undo
> ... that is NOT a catch up with the mouse problem.

I think this relates to the "deferred redraw" thing I mentioned about
the lesstif hid. In this case, what's happening with the gtk hid is
that the view is being refreshed for EACH trace that gets moved. Even
at 30 FPS, for hundreds of traces that's many seconds to redraw.

If you want to take a stab at adding deferred refresh to the gtk hid,
it would be much appreciated.

fpga...@yahoo.com

unread,
May 24, 2006, 9:57:12 AM5/24/06
to

DJ Delorie wrote:
> If you want to take a stab at adding deferred refresh to the gtk hid,
> it would be much appreciated.

That's probably the solution for one problem. It has a number of
others, all related to excessive lag in responsiveness, some of which
have nothing to do with drawing ... like a huge lag when you take the
cursor off the viewport to another window and back. I suspect this
is a number of problems that are additive, not a single quick kill
... unless it's a single extra viewport update that got left in from
debugging a year ago.

fpga...@yahoo.com

unread,
May 24, 2006, 10:04:21 AM5/24/06
to

fpga_t...@yahoo.com wrote:
> DJ Delorie wrote:
> > If you want to take a stab at adding deferred refresh to the gtk hid,
> > it would be much appreciated.

There are too many other things that are broken, like the bounding box
problem on the back side, the lag on bounding box and arrow key
activities, and the like, that all smell like this project was never
finished ... somebody quit in the middle of the port without doing a
good job of debug and checkout.

Your version works ... with some nits you are already aware of.

I think I'll go back to 050318 until you guys are finally done ... it
works just fine too, and I've grown skilled at using its UI ... yours
has everything different ... from mouse actions, menus, pull downs,
etc. ... and you don't seem done yet.
