Cindy
George
T
That's also nice. I had in mind the other 'used to be' market leader program in DTP layout.
Cheers
George
If you mean Adobe did it first, that's incorrect. Two preceding
implementations that come immediately to mind are David Kindersley's
optical computer (1976) and URW's Kernus software. There are
undoubtedly others.
Toby
(The other important typographic feature of InDesign, the "multiline composer", is of course the algorithm invented by Donald Knuth and implemented in his TeX typesetting system in the early 1980s. For 15 years I have wondered why this method was not adopted sooner in WYSIWYG applications.)
Toby
As for optical kerning the way Adobe implemented it, I am not sure I've ever seen that before; I mean 'in practice'.
As for the 'multiline composer', that was its name in ID 1.5. In ID 2.x it became the 'paragraph composer' (I would like to keep control of the lines as a parameter).
And back in the 1980s there was Mergenthaler Linotype's 'System5', running on a Prime minicomputer: its 'Book Pagination' software could do the same trick, but for a complete book, iterating over everything (including footnotes) for hours until it reached a perfect result, with all specifications and parameters taken care of.
What we have now is a far cry from that software and the miracles it was able to do.
So much for evolution and re-inventing the wheel.
thanks
George
George
Are you sure the Prime system broke paragraphs using an algorithm comparable to Knuth's? If so, perhaps they never published the research behind it. Iterative optimisation of *page* breaks is conceivable; TeX does that too. Maybe FrameMaker can; I do not know. InDesign still does not, of course; many of these refinements are inevitably lost, since the mainstream won't accept code-based setting.
Re: Kernus, that is correct; however the method (and end result) is similar... I just wanted to dispel the impression that Adobe was first with mechanical methods for optical kerning.
Toby
I am not sure whether Prof. Knuth was aware of System5, or whether it was the other way around (Linotype being aware of Prof. Knuth's work?).
The fact is that the system did superb book pagination, including excellent footnotes, running titles, headers etc., plus optimum paragraph composition.
Glory to the old masters of programming, who accomplished feats we miss today (and they were limited to the hardware they had at the time). This has to be said.
Regards
George
Three cheers for what you said.
Happy Holidays to you and yours
George
If such feats (and Professor Knuth's work) prove anything, it's the quality of the *idea* behind the implementation that matters.
As Henry Spencer said, paraphrasing a famous George: "Those who don't understand UNIX are doomed to reinvent it, poorly." The aphorism applies to typesetting systems as well. The moral being: let's not lose touch with such history!
Toby
> The other important typographic feature of InDesign, the "multiline composer",
> is of course the algorithm invented by Donald Knuth and implemented in
> his TeX typesetting system in the early 1980s.
I've been curious about this. Is the Adobe Paragraph Composer (as it's called in InDesign) actually the same as Knuth's composition algorithm from TeX? They do share the characteristic of analyzing multiple lines in order to optimize hyphenation and word breaks. They do have some differences though. I haven't used TeX in years, but I recall that if the scoring system ("badness") was too far out of range, TeX would give up and put an ugly black box near the offending line, and also emit a warning to its output log. InDesign does nothing like this, of course, but it will silently break some of the constraints you might have imposed in your H&Js, such as maximum word spacing.
Is anybody willing to share the specifics of the Adobe Paragraph Composer algorithms and compare them with TeX? I'm curious.
s'marks
This is only the case for 'overfull' hboxes. Of course you can influence
the typesetting system (however useful that might be) so that you will
never see any black box again (btw it's just a visual marker so that you
can see the problem; and again, you can control all aspects of this). And
be aware that InDesign also flags paragraph/type problems by colorizing
them ;-)
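For those following along in TeX, the relevant knobs look roughly like
this (a minimal sketch; the values are illustrative only, not
recommendations):

  \overfullrule=0pt      % width of the black marker; 0pt hides it entirely
  \hfuzz=1pt             % ignore overfull boxes smaller than this
  \hbadness=10000        % silence 'underfull/loose' complaints up to this badness
  \tolerance=2000        % accept looser lines before the paragraph builder gives up
  \emergencystretch=1em  % last-resort stretch added on a third line-breaking pass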
InDesign's advantages (regarding typesetting/type handling) are
obviously 'optical kerning' and---if I remember correctly---handling of
rivers (and these features put InDesign more in the direction/succession
of hz than of TeX). On the other hand, TeX (in this case pdftex by Han The
Thanh) has more flexibility with character protrusion (into the left and
right margins) and font expansion, as you can control every aspect not
only per font but also per character. For example, sometimes it would be
better if the hyphen of a specific font protruded only 75% into the
margin. (There are a lot of micro-typographic aspects which would be
interesting to discuss with InDesign developers.)
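As a concrete (and purely illustrative) sketch of the hyphen example: in
LaTeX the microtype package exposes pdftex's per-character controls; the
values below are assumptions, not recommendations.

  \usepackage[protrusion=true,expansion=true]{microtype}
  % Protrusion factors are given per character; with the default unit they
  % are (if I recall the manual correctly) thousandths of that character's
  % own width, as {left margin, right margin}.
  \SetProtrusion
    { encoding = T1 }
    { - = { , 750 },    % hyphen may hang 75% of its width into the right margin
      . = { , 600 } }   % full stop somewhat less (made-up value)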
Best regards,
Ulrich Dirr
I've seen it written that URW and Zapf did use Knuth's work.
Neat info, Dominic. Do you happen to have a link or other source for that? I had never made the connection before, but it does make sense. I'd like to read up on it if possible.
Gene
<http://www.pvv.ntnu.no/~aslakr/hz.html>
<http://members.aol.com/willadams/books-e-tex.html>
and see the info on TeX at:
<http://ftp.linux.cz/pub/tex/local/cstug/thanh/hz/description.pdf>
BlueSky also has a version of TeX's H&J algorithm for QuarkXPress on the Mac. It's called Breakers, but I don't know how well it works. See <http://www.bluesky.com/> .
T
Thanks for the links. It's interesting to see the common intellectual threads running through all of TeX, HZ, and InDesign.
s'marks
Well, I consider the justification routine that Knuth invented for TeX to be the largest of many brilliant innovations in the program (the rest almost all dealing with the typesetting of mathematics).
The fact that he a) published his algorithm with clear (if you know programming) documentation, and b) placed the program in the public domain, meant that other programs were certain to follow his path.
(I have used BlueSky TeX ages ago, and it was pretty good. I haven't used the Quark plugin that Dominic mentioned, although I have good faith in the company generally.)
Yes, the new TeXLive on DVD is great. If you want to test the more
advanced typographic features then try Han The Thanh's pdftex. For
motivation, look at http://www.tug.org/texshowcase/ to see nice examples.
LaTeX is basically a macro package for TeX. A true typesetter would not want to use it. It is the kind of thing a TeXpert builds for a consistent style, such as a journal, that mathematicians can then prepare input for.
In plain TeX, changes of this type are much easier, although occasionally they can be infuriatingly complex. The beauty of TeX is that the source code is available, so you can do "anything" with it, if you go deep enough in. Even InDesign and Quark have limitations if you want to do something the programmer hadn't thought of.
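A tiny, purely illustrative sketch of what that looks like in plain TeX;
the macro name and values are made up, not from any real journal style:

  % a consistent 'journal style' in plain TeX is just macros you define yourself
  \def\articletitle#1{\vskip 18pt plus 6pt
    \noindent{\bf #1}\par\nobreak\vskip 9pt}
  \articletitle{On the Spacing of Letters}
  Body text follows here, set with whatever paragraph parameters you like.
  \bye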
For the composition of this publication about the work of David Kindersley
the type face `Optima' was used in connection with David Kindersley's
LOGOS spacing program. As demonstrated here, with phototypesetting you
have the possibility of perfect intercharacter spacing.
When designing `Optima' in 1952 I had to avoid as much as possible the
kerning of too many characters in metal, for casting reasons, and under
the circumstances at that time it was not always easy getting good letter
combinations. I spent a lot of time in research and tests on legibility.
It is very important that the printed alphabet should be easily legible;
this is after all the main purpose of letters within a text. Letters are
the bridge transferring the message of an author to the reader in the
most convenient way.
We should not wonder that so many people nowadays -- particularly young
people -- don't like to read. The presentation of letterforms in printed
texts, and even more on TV or personal computer screens, ignores completely
the laws of legibility. One of the rules is that letters should be picked
up quickly and accurately. Because this so often does not happen, in the
end pictures are preferred.
Today's fashion for tight spacing is one of the problems. Too narrow spacing
between characters taken to the extreme slows down the speed of reading.
Computer-generated characters can seldom be read comfortably; apart from
the poor design of the letterforms themselves the distances between characters
are uneven.
Formerly, as an apprentice in a print shop, a compositor had first to learn
how to space caps, and to correct poor spacing of words in large sizes.
Using a file he kerned letters with great care and patience. Today with
our modern phototypesetting systems we should be able to get the most
perfect intercharacter spacing. But why do we still see so many bad examples
everywhere? Books are not being designed in a way which invites us to
read them.
We are not so presumptuous as to wish to educate a reader, but we should
try to open his eyes to good typography and even spacing between letters
-- human letterspacing as I would call it. There is a perfect letterspacing
program already available, David Kindersley's LOGOS spacing program, which
I would like to see become an industry standard and adapted to all typesetting
systems. It is very important, and furthermore it is needed. No one could
create such a program better than an artist like David Kindersley, experienced
in handling all kinds of letterforms. It is high time we stopped ignoring
the studies in legibility and the findings of such important personalities
in typographic research as Bror Zachrisson in Stockholm and Willem Ovink
in Amsterdam. David Kindersley's LOGOS works. There is no excuse any more
for bad intercharacter spacing or imperfect kerning.
(Emphasis mine.) I do not know when and where this was published. The original source on the web (bluefuzz) seems to have disappeared but there is a copy of the text here <http://usuarios.lycos.es/ceadetecnologia/legibilidad.html>.
I also kept some notes by David Kindersley himself.
Spacing in theory
The proper spacing of alphabets of capitals and lower cases should be
such that the texture of the printed page is even. After 500 years of
printing this should not need saying. Yet present methods have become
so wide-spread and so accepted that, in spite of being contrary to the
behavior of the eye, they remain with us and look like being fossilised
for years to come.
One reason for the continuing existence of the method used today is the
influx of computer programs into this business, which has led to even
worse results. This is a great pity because the computer, properly used,
can at last produce what the eye deserves.
Existing methods allot measured distances to the left and right of a letter
without any consideration of the clear space within a letter. Surely our
experience of the erosion of black image by white space should inform
us!
``Each type letter, wherever it goes, carries along with it two fixed
blank spaces, one on each side. And, of course, each one of the 26 is
likely to be placed alongside any one of the other 25 with their fixed blank
spaces ... the letter shapes occur in groups of similars etc. etc.''
Thus wrote W.A. Dwiggins to a friend. I am a great admirer of W.A.D.,
but surely we should understand that his advice belonged to a different
age where printers everywhere made use of metal type. The aim of the game
is to space letters and not rectangles. It is not necessary today -- nor
was it so with the scribes and early printers.
Kerning seems to generate a great deal of heat amongst those who were
trained in the use of metal type. The word should be forgotten when spacing
letters. For a scribe or a draughtsman of letters the overlapping of one
letter by another is simply spacing. He will almost certainly be unaware
of the word ``kerning''. And so it is with a sensible program based on
the eye.
Units which were such a limitation to spacing and design have no meaning
left for those concerned with the proper fit of letters. If the counter
or inside shape of a letter is changed the space outside a letter will
require change to meet the requirement of the eye.
Walter Tracy in his book ``Letters of Credit'' writes: ``It is to be hoped,
though, that before long electronic type-setting systems will be equipped
with a complete modulation program so that all aspects of a type -- its
x-height and stroke weights and its internal as well as its external spaces
-- will be automatically adjusted according to type size, so as to maintain
a proper balance of all the elements of design.''
These words show a very deep perception of the way programs should be
developed. However, the task won't be made easy unless the way the eye
perceives letters is investigated far more than it has been so far. For
example, leaving aside the actual type-sizes required, the program must
increase stroke thickness and counter size at the same time as the type-size
is decreased. Though this sounds like a contradiction it is true for the
eye. I know of no program that does this satisfactorily at the moment,
but I do believe it to be possible and it should include spacing; indeed
it should be one and the same thing.
This program would not work in units. The LOGOS system of spacing is based
on calculating the center of each letter. The starting point is a fairly
conventional one; the true formula, however, uses higher moments. To find
the center of gravity, weight is multiplied simply by distance; spacing,
on the other hand, requires a higher moment, such as weight multiplied by
distance squared (the power of 2), to agree with the center required by
the eye.
The centers obtained form the very basis of spacing when joined one to
another. So good are these results that one might reasonably be forgiven
for believing a law is being discovered.
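To make the arithmetic concrete (and this is only my reading of the notes,
not Kindersley's published formula): for a letter's horizontal distribution
of 'ink' $w(x)$, the ordinary centre of gravity $c_1$ balances first
moments, while a LOGOS-style centre $c_k$ balances a higher moment, which
shifts the balance point toward the outlying parts of the letter:

  $$\int (x - c_1)\,w(x)\,dx = 0$$
  $$\int_{x<c_k} (c_k - x)^k\,w(x)\,dx \;=\; \int_{x>c_k} (x - c_k)^k\,w(x)\,dx,
    \qquad k > 1$$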
> Today I came across some files I'd kept on this topic.
> A Few Notes on Legibility of Typefaces, by Hermann Zapf
> We should not wonder that so many people nowadays --
> particularly young people -- don't like to read. The
> presentation of letterforms in printed texts, and even more
> on TV or personal computer screens, ignores completely
> the laws of legibility.
I doubt that good versus bad typography has anything to do with an
alleged decline in reading.
> I also kept some notes by David Kindersley himself.
> Spacing in theory
> The LOGOS system of
> spacing is based on calculating the center of each letter by
> using a fairly conventional method.
> The centers obtained form the very basis of spacing when
> joined one to another.
This is very interesting. His theory of spacing is that it is not about
creating even spaces between the edges of the letterforms, but rather
about creating even spaces between the geometric centers of the
letterforms.
This seems to contradict my experience of perceiving poor spacing,
where it is the differences in space between the edges of the letters
that makes spacing look uneven. It also contradicts my understanding
of how the human visual system processes information at the lowest
level. The low level human visual system is designed to compute the
relative position of edges in the visual field. It is not designed to
compute the relative position of the geometric centers of objects in
the visual field. (This of course does occur at later, higher levels
of visual processing.)
On the other hand, it is certainly possible that because of the
shapes of letterforms that have evolved over time to be legible and
pleasing to the eye, even spacing of letters by their geometric
centers happens to lead to evenness of the space between the edges of
letters. Anybody know how the InDesign optical kerning algorithm does
its thing?
I would think this does not mean the geometric centers, but the optical center. This means that an OO would kern tighter than an HH in a sans font because the curved shape has an optical edge that is different from the physical edge. This is why the O does not sit on the baseline (except in really horrid, amateur fonts).
> Re: His theory of spacing is that it is not about
> creating even spaces between the edges of the letterforms, but
> rather about creating even spaces between the geometric centers of
> the letterforms.
>
> I would think this does not mean the geometric centers, but the
> optical center. This means that an OO would kern tighter than an
> HH in a sans font because the curved shape has an optical edge
> that is different from the physical edge.
How would the "optical" center of an "O" or "H" be any different than
the geometric center?
I am extrapolating that feature from the vertical to the horizontal. Thus, two Os would have to be closer together than two Hs to be optically correct.
I hope this explains what I mean. If not, I'll have to break out AI (Adobe Illustrator) and make up some samples.