
poorly formed underbrace(s)


coo...@nospamverizon.net

Nov 24, 2011, 5:00:55 PM
For lack of a better description -- basically, I've noticed that the
underbraces I've used in various equations in one of my books look
rather poor ('choppy', 'rough'), when rendered either to .dvi or to .pdf
(using, typically, .dvi -> .ps -> .pdf compilation).

The following minimal example shows what I mean -- at least, it does on
my end (using GS 9.0.4, Adobe Acrobat X, and a 1900 x 1200 res monitor).
If I compile the following, the underbrace looks 'rough' -- as if it has
been ligatured together from 3 discrete pieces (especially obvious if
you blow it up to -- say -- 800% of normal). Since it shows up even when
printing on a high-end printer, this is not merely an obsession with 'on
screen' aesthetics. I found the following question, which seems to
address the issue -- is the response given there more or less correct?

http://tex.stackexchange.com/questions/12141/underbrace-choppy

\documentclass{minimal}

\usepackage{amsmath}

\begin{document}

Here is an example of a `poorly formed' structure (in this example, an
under-brace):

\begin{equation*}
\underbrace{\hat{S}_i-S_i}_{\epsilon}
\end{equation*}

Here is a left and right brace that look fine:

\begin{equation*}
\frac{dN}{dt}=rN\left\{1-\frac{N}{K}\right\}
\end{equation*}

\end{document}

Donald Arseneau

Nov 24, 2011, 9:54:24 PM
coo...@NOSPAMverizon.net writes:

> For lack of a better description -- basically, I've noticed that the
> underbraces I've used in various equations in one of my books look rather
> poor ('choppy', 'rough'), when rendered either to .dvi or to .pdf (using,
> typically, .dvi -> .ps -> .pdf compilation).
>
> The following minimal example shows what I mean -- at least, it does on my
> end (using GS 9.0.4, Adobe Acrobat X, and a 1900 x 1200 res monitor). If I
> compile the following, the underbrace looks 'rough' -- as if it has been
> ligatured together from 3 discrete pieces (especially obvious if you blow it
> up to -- say -- 800% of normal).

Well it *IS* built up from discrete pieces. The bad part is
that it is built from four character glyphs and two rules, and
the algorithms used to render or pixelize characters and rules
are different. Basically, characters can be relied on to always
be adjusted the same way to go onto the pixel grid, but the edges
of rules get rounded in various ways depending on the output
display program. I get different results for xdvi, evince, acroread
and xpdf. At various resolutions you will see that the edges of
the rules move around by 1 pixel: sometimes 1 pixel too high
and sometimes 1 pixel too low, or even leaving 1 pixel gaps.

Your comparison left and right braces are regular characters, not
composites, but the biggest sizes are composites of several characters.
That's right, just characters -- no rules -- so they don't suffer
from "roughness". I don't know why the cmex font contains no
character to use for the straight portion of \underbrace.
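
For reference, the construction in question is essentially plain.tex's
\upbracefill (a sketch of the plain.tex definitions; LaTeX's kernel
version is much the same, with the rule height bounded explicitly).
The \mathchardef values select family 3 (cmex10) slots "7A-"7D:

```latex
% The four cmex10 quarter-brace glyphs:
\mathchardef\braceld="37A % left piece, curling down  (slot "7A)
\mathchardef\bracerd="37B % right piece, curling down (slot "7B)
\mathchardef\bracelu="37C % left piece, curling up    (slot "7C)
\mathchardef\braceru="37D % right piece, curling up   (slot "7D)
% The straight stretches are filled with \leaders\vrule; these two
% rules are what rasterize differently from the glyphs around them.
\def\upbracefill{$\m@th \bracelu\leaders\vrule\hfill\bracerd
  \braceld\leaders\vrule\hfill\braceru$}
```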

Donald Arseneau as...@triumf.ca

coo...@nospamverizon.net

Nov 25, 2011, 11:36:04 AM
to Donald Arseneau
Thanks very much -- it all makes sense now (more or less). It might seem
that the only important point in the end is 'how does it print?', but
more and more of what I do is published to the screen, not to paper. So,
I'm increasingly interested in optimizing for 'screen'. Seems as though
there are some limits here -- partly owing to how PDF readers work, and
partly owing to (La)TeX itself.

Herbert Schulz

Nov 25, 2011, 1:38:30 PM
Howdy,

I tried your document set up for xelatex and the underbrace seems much better.

%%!TEX TS-program = xelatex
%%!TEX encoding = UTF-8 Unicode
\documentclass{article}
\usepackage{amsmath}
\usepackage{fontspec}
\setmainfont{TeX Gyre Pagella}
\usepackage{unicode-math}
\setmathfont{Asana Math}
\begin{document}

Here is an example of a `poorly formed' structure (in this example, an
under-brace):

\begin{equation*}
\underbrace{\hat{S}_i-S_i}_{\epsilon}
\end{equation*}

Here is a left and right brace that look fine:

\begin{equation*}
\frac{dN}{dt}=rN\left\{1-\frac{N}{K}\right\}
\end{equation*}

\end{document}

--
Good Luck,
Herb Schulz

coo...@nospamverizon.net

Nov 25, 2011, 2:39:36 PM
to Herbert Schulz
On 11/25/2011 1:38 PM, Herbert Schulz wrote:
> \documentclass{article}
> \usepackage{amsmath}
> \usepackage{fontspec}
> \setmainfont{TeX Gyre Pagella}
> \usepackage{unicode-math}
> \setmathfont{Asana Math}
> \begin{document}
>
> Here is an example of a `poorly formed' structure (in this example, and
> under-brace):
>
> \begin{equation*}
> \underbrace{\hat{S}_i-S_i}_{\epsilon}
> \end{equation*}
>
> Here is a left and right brace that look fine:
>
> \begin{equation*}
> \frac{dN}{dt}=rN\left\{1-\frac{N}{K}\right\}
> \end{equation*}
>
> \end{document}


Thanks -- I'll give it a try.

coo...@nospamverizon.net

Nov 25, 2011, 4:51:01 PM
On 11/25/2011 1:38 PM, Herbert Schulz wrote:
Indeed -- it does 'look better'.

Mico Loretan

Nov 26, 2011, 12:00:49 PM
If you can stand (or even like?!) the look of Times New Roman, you
could try using the "mtpro2" package, which produces much nicer
horizontal and vertical braces. Specifically, the package provides the
command `\undercbrace` ("c" for "curly"); the output of this command
looks good both on screen and on paper. Whereas the "full" mtpro2
package is not free-of-charge, the "lite" subset is free, and it's all
you need to produce good-looking curly braces.

\documentclass{minimal}
\usepackage{amsmath,tgtermes}
\usepackage[lite]{mtpro2}
\begin{document}
A `well-formed' underbrace:
$\undercbrace{\hat{S}_i-S_i}_{\epsilon}$.
\end{document}

coo...@nospamverizon.net

Nov 27, 2011, 1:42:19 PM
And we're done... ;-)

Actually, TNR isn't completely horrible, so I'll probably have a look
just to satisfy my curiosity.

I have never-ending challenges with fonts. I currently use Palatino
(Math Pazo) because it is the best compromise for my needs. Say, a
1000-page book with lots of equations. The book is generally too long
for simple printing, so I also generate a version in landscape
orientation, 2 pages per sheet. At such a reduced point size, I need a
font with 'meat' -- one that remains legible at that size. Palatino
works pretty well (and I have full math support), but after 8+ years of
using it, I'm getting tired of it.

Herbert Schulz

Nov 27, 2011, 2:13:17 PM
In article <jau0ai$61c$1...@ruby.cit.cornell.edu>,
Howdy,

Try

\usepackage{fourier}
\usepackage[scaled=0.85]{berasans}
\usepackage[scaled=0.85]{beramono}

which seems to look better than CM on my system although not as good as
using Asana Math with xelatex.

Good Luck,
Herb Schulz

Dan Luecking

Nov 28, 2011, 12:38:26 PM
On Sat, 26 Nov 2011 09:00:49 -0800 (PST), Mico Loretan
<loreta...@gmail.com> wrote:

>On Nov 24, 5:00 pm, cooc...@NOSPAMverizon.net wrote:
[...]
>
>If you can stand (or even like?!) the look of Times New Roman, you
>could try using the "mtpro2" package, which produces much nicer
>horizontal and vertical braces. Specifically, the package provides the
>command `\undercbrace` ("c" for "curly"); the output of this command
>looks good both on screen and on paper. Whereas the "full" mtpro2
>package is not free-of-charge, the "lite" subset is free, and it's all
>you need to produce good-looking curly braces.
>
>\documentclass{minimal}
>\usepackage{amsmath,tgtermes}

This is TeX-Gyre Termes, not Times New Roman, though it
is supposed to match (some version of) Times.

>\usepackage[lite]{mtpro2}
>\begin{document}
>A `well-formed' underbrace:
>$\undercbrace{\hat{S}_i-S_i}_{\epsilon}$.
>\end{document}

This works fine until the braced material exceeds 295 points in width:
$$
\undercbrace{abcdefghijklmnopqrstuvwxyz -
abcdefghijklmnopqrstuvwxyz}_{\epsilon}
$$
is OK, but
$$
\undercbrace{abcdefghijklmnopqrstuvwxyz -
abcdefghijklmnopqrstuvwxyz +
abcdefghijklmnopqrstuvwxyz}_{\epsilon}
$$
produces an underbrace that is too short.

The code in mtpro2.sty for \undercbrace selects one of a
limited number of underbraces in the mt2ex* font(s); the
largest seems to be about 295 points wide.
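
One hedged workaround (a sketch only; the \SafeUndercbrace name and the
295pt cutoff are my assumptions, the latter taken from the limit
observed above) is to measure the braced material and fall back to the
standard extensible \underbrace when it is too wide for mtpro2:

```latex
\documentclass{minimal}
\usepackage{amsmath,tgtermes}
\usepackage[lite]{mtpro2}
\newsavebox\UBbox
% Use mtpro2's \undercbrace when it fits, otherwise the standard
% (rule-based, but arbitrarily extensible) \underbrace.
\newcommand\SafeUndercbrace[1]{%
  \sbox\UBbox{$\displaystyle#1$}%
  \ifdim\wd\UBbox>295pt
    \underbrace{#1}%
  \else
    \undercbrace{#1}%
  \fi}
\begin{document}
$\displaystyle\SafeUndercbrace{\hat{S}_i-S_i}_{\epsilon}$
\end{document}
```

Adjust the cutoff if the actual largest brace in the mt2ex* fonts
differs from the 295pt estimate.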

The code for \undercbrace also fails in sub- and superscripts.
Of course, the same is true of the standard \underbrace.


Dan
To reply by email, change LookInSig to luecking

Ilya Zakharevich

Dec 1, 2011, 6:44:02 AM
On 2011-11-25, Donald Arseneau <as...@triumf.ca> wrote:
> Well it *IS* built up from discrete pieces. The bad part is
> that it is built from four character glyphs and two rules, and
> the algorithms used to render or pixelize characters and rules
> are different. Basically, characters can be relied on to always
> be adjusted the same way to go onto the pixel grid, but the edges
> of rules get rounded in various ways depending on the output
> display program. I get different results for xdvi,evince, acroread
> and xpdf. At various resolutions you will see that the edges of
> the rules move around by 1 pixel: sometimes 1 pixel too high
> and sometimes 1 pixel too low, or even leaving 1 pixel gaps.

So this is essentially a problem with dvi2WHATEVER converters
(including pdfTeX by extension...), right? [E.g., I do not see any
discrepancy on pmdvi viewer...]

I wonder whether one could circumvent this by having a special like
\special{rulechar: on/off} so that

\special{rulechar: on} \underbrace{...} \special{rulechar: off}

would advise the converter to transform rules into a rescaled char (this
char, of course, must be known to have a rectangular glyph). Then the
underbrace would be composed entirely of characters (as far as the
particular document format is concerned), so it wouldn't have these problems?

Puzzled,
Ilya

Donald Arseneau

Dec 2, 2011, 1:45:04 AM
Ilya Zakharevich <nospam...@ilyaz.org> writes:

> On 2011-11-25, Donald Arseneau <as...@triumf.ca> wrote:
> > Well it *IS* built up from discrete pieces. The bad part is
> > that it is built from four character glyphs and two rules, and
> > the algorithms used to render or pixelize characters and rules
> > are different. Basically, characters can be relied on to always
> > be adjusted the same way to go onto the pixel grid, but the edges
> > of rules get rounded in various ways depending on the output
> > display program. I get different results for xdvi,evince, acroread
> > and xpdf. At various resolutions you will see that the edges of
> > the rules move around by 1 pixel: sometimes 1 pixel too high
> > and sometimes 1 pixel too low, or even leaving 1 pixel gaps.
>
> So this is essentially a problem with dvi2WHATEVER converters
> (including pdfTeX by extension...), right? [E.g., I do not see any
> discrepancy on pmdvi viewer...]

Sort of, but it is not a "bug" in the sense of a simple mistake
that can be corrected; it is an intrinsic problem with rasterizing.
Rendering can choose to "snap" the rule edges to the pixel grid to
give the best agreement for only *two* of these three: the top
edge position, the bottom edge position, or the thickness. There
is little or no context that says which of these is less critical
than the others in each particular rule.
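
The effect is easy to provoke with a test file of nothing but rules
(an illustrative sketch, not from the thread): identical rules whose
edges are offset by sub-pixel amounts get snapped to different pixel
rows by different viewers, so at many zoom levels the three rules do
not all render with the same apparent thickness or position.

```latex
\documentclass{minimal}
\begin{document}
% Identical 0.4pt rules separated by sub-pixel gaps; a rasterizer
% must round each edge (or the thickness) to the pixel grid, and
% the three rules typically do not all round the same way.
\hbox{\vrule height 0.4pt width 60pt}
\nointerlineskip \vskip 0.55pt
\hbox{\vrule height 0.4pt width 60pt}
\nointerlineskip \vskip 0.70pt
\hbox{\vrule height 0.4pt width 60pt}
\end{document}
```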

> I wonder whether one could circumvent this by having a special like
> \special{rulechar: on/off} so that
>
> \special{rulechar: on} \underbrace{...} \special{rulechar: off}

No, that doesn't get it. You just need to define \underbrace to
use characters *IF CHARACTERS ARE PROVIDED*, which they are not.
Contents of \special don't affect the definition of \underbrace!

Left and right braces do use characters for the vertical segments
(look at the font map for cmex10). It doesn't look hard to define
an \underbrace that uses \rotatebox and $\right\}$. Hmmm, not
exactly easy because of the spacing/positioning differences...



Donald Arseneau as...@triumf.ca

Donald Arseneau

Dec 2, 2011, 6:32:43 AM
Donald Arseneau <as...@triumf.ca> writes:

> Left and right braces do use characters for the vertical segments
> (look at the font map for cmex10). It doesn't look hard to define
> an \underbrace that uses \rotatebox and $\right\}$. Hmmm, not
> exactly easy because of the spacing/positioning differences...

The positioning is tricky because one must keep track of the correct
reference point during rotation. The spacing is different because
the glyphs that make the left and right braces are quite different
from the ones that make the over and under braces. Oddly different.

\font\cmx=cmex10
\def\shoc#1{{#1:~\fboxsep=0pt\fboxrule=0.1pt\fbox{\cmx\char"#1}}}

Show glyph-fragment distinctions with both size and bounding-box:
\shoc{7A} \shoc{7C} \shoc{38} \shoc{3A} \shoc{3C}


% Make \underbrace be a rotated right brace:

\RequirePackage{graphics}
\makeatletter
\def\underbrace#1{\mathop{\hbox{\m@th
\setbox\z@\hbox{$\displaystyle{#1}$}%
\vrule height \ht\z@ width\z@
\nulldelimiterspace\z@
\delimiterfactor\@m \delimitershortfall\z@
\rotatebox{-90}{$\left.\mathstrut
\vcenter{\hbox{\kern-\ht\z@
\rotatebox{90}{\box\z@}}}\mkern-1mu\right\rbrace\mkern-1.3mu$}%
}}\limits
}




Donald Arseneau as...@triumf.ca

Ilya Zakharevich

Dec 2, 2011, 6:34:24 AM
On 2011-12-02, Donald Arseneau <as...@triumf.ca> wrote:
> Ilya Zakharevich <nospam...@ilyaz.org> writes:

>> I wonder whether one could circumvent this by having a special like
>> \special{rulechar: on/off} so that
>>
>> \special{rulechar: on} \underbrace{...} \special{rulechar: off}

> No, that doesn't get it. You just need to define \underbrace to
> use characters *IF CHARACTERS ARE PROVIDED*, which they are not.

So redefining \underbrace is not a solution - understood. THIS is why
I use \special's instead, to do "redefinitions" in the dvi2WHATEVER
converter. Presumably, the converter would know at least ONE
character with a rectangular glyph; then *the converter* would be able
to replace any rule by this character (appropriately rescaled).

The \special's purpose is to advise the converter when to do such an
ugly hack...

> Contents of \special don't affect the definition of \underbrace!

But they may affect how the dvi2WHATEVER converter would interpret a
"rule" directive, right?

Am I more clear now?
Ilya

Donald Arseneau

Dec 2, 2011, 5:37:07 PM
Ilya Zakharevich <nospam...@ilyaz.org> writes:

> On 2011-12-02, Donald Arseneau <as...@triumf.ca> wrote:
> > Ilya Zakharevich <nospam...@ilyaz.org> writes:
>
> >> I wonder whether one could circumvent this by having a special like
> >> \special{rulechar: on/off} so that
> >>
> >> \special{rulechar: on} \underbrace{...} \special{rulechar: off}
>
> > No, that doesn't get it. You just need to define \underbrace to
> > use characters *IF CHARACTERS ARE PROVIDED*, which they are not.
>
> So redefining \underbrace is not a solution - understood. THIS is why

Did you see my redefinition?

> I use \special's instead, to do "redefinitions" in the dvi2WHATEVER
> converter. Presumably, the converter would know at least ONE
> character with a rectangular glyph; then *the converter* would be able
> to replace any rule by this character (appropriately rescaled).

That doesn't get you anywhere. You need matched sets of characters
for seamless assembly, and some random rectangular character
"(appropriately rescaled)" can't work any better than a simple rule.

> > Contents of \special don't affect the definition of \underbrace!
>
> But they may affect how the dvi2WHATEVER converter would interprete a
> "rule" directive, right?

Most "whatever" is not doing the actual rendering to pixels, but
rather the later viewing program or printer, so the imaginary
\special would have to affect that process. It would have to
tell the rendering process (all of them, which you don't control)
to look at the rendering of the adjacent characters and ensure
that the edges of the rule were on the same pixels as the edges
of the "ink" within the characters. That would be a tour-de-force
of image processing.

> Am I more clear now?

Still barking up the wrong tree.



Donald Arseneau as...@triumf.ca

Ilya Zakharevich

Dec 3, 2011, 7:08:32 PM
On 2011-12-02, Donald Arseneau <as...@triumf.ca> wrote:
>> > No, that doesn't get it. You just need to define \underbrace to
>> > use characters *IF CHARACTERS ARE PROVIDED*, which they are not.

>> So redefining \underbrace is not a solution - understood. THIS is why

> Did you see my redefinition?

One which does not work with pure-DVI viewers? Yes.

>> I use \special's instead, to do "redefinitions" in the dvi2WHATEVER
>> converter. Presumably, the converter would know at least ONE
>> character with a rectangular glyph; then *the converter* would be able
>> to replace any rule by this character (appropriately rescaled).
>
> That doesn't get you anywhere. You need matched sets of characters
> for seamless assembly, and some random rectangular character
> "(appropriately rescaled)" can't work any better than a simple rule.

???

According to YOUR analysis, all that is missing to make a perfect
underbrace is a rectangular character of the height \ht of cmex10's
"78. It is mocked up by a rule in the current \underbrace code, and
it is THIS mocking which causes the problems.

All that I propose is to make dvi2WHATEVER to recognize the mocking,
and make it back into a char.

>> > Contents of \special don't affect the definition of \underbrace!
>>
>> But they may affect how the dvi2WHATEVER converter would interprete a
>> "rule" directive, right?
>
> Most "whatever" is not doing the actual rendering to pixels, but
> rather the later viewing program or printer, so the imaginary
> \special would have to affect that process. It would have to
> tell the rendering process (all of them, which you don't control)
> to look at the rendering of the adjacent characters and ensure
> that the edges of the rule were on the same pixels as the edges
> of the "ink" within the characters. That would be a tour-de-force
> of image processing.

Maybe. But this is not what I proposed.

All that these specials should do is to convert any rule in between
them to an appropriately scaled rectangular char.

>> Am I more clear now?
>
> Still barking up the wrong tree.

Am I more clear now?

Ilya

Donald Arseneau

Dec 6, 2011, 10:14:50 AM
Ilya Zakharevich <nospam...@ilyaz.org> writes:

> On 2011-12-02, Donald Arseneau <as...@triumf.ca> wrote:
> >> > No, that doesn't get it. You just need to define \underbrace to
> >> > use characters *IF CHARACTERS ARE PROVIDED*, which they are not.
>
> >> So redefining \underbrace is not a solution - understood. THIS is why
>
> > Did you see my redefinition?
>
> One which does not work with pure-DVI viewers? Yes.
>
> >> I use \special's instead, to do "redefinitions" in the dvi2WHATEVER
> >> converter. Presumably, the converter would know at least ONE
> >> character with a rectangular glyph; then *the converter* would be able
> >> to replace any rule by this character (appropriately rescaled).
> >
> > That doesn't get you anywhere. You need matched sets of characters
> > for seamless assembly, and some random rectangular character
> > "(appropriately rescaled)" can't work any better than a simple rule.
>
> ???
>
> According to YOUR analysis, all that is missing to make a perfect
> underbrace is a rectangular character of the height \ht of cmex10's
> "78.

No, it is only according to your understanding of it. ("78?? Do you
mean "7A?). Seamless matching characters must have the same "ink"
position relative to the reference point.

> All that I propose is to make dvi2WHATEVER to recognize the mocking,
> and make it back into a char.

Please do so, since that is "all" there is to it.

> >> > Contents of \special don't affect the definition of \underbrace!
> >>
> >> But they may affect how the dvi2WHATEVER converter would interprete a
> >> "rule" directive, right?
> >
> > Most "whatever" is not doing the actual rendering to pixels, but
> > rather the later viewing program or printer, so the imaginary
> > \special would have to affect that process. It would have to
> > tell the rendering process (all of them, which you don't control)
> > to look at the rendering of the adjacent characters and ensure
> > that the edges of the rule were on the same pixels as the edges
> > of the "ink" within the characters. That would be a tour-de-force
> > of image processing.
>
> Maybe. But this is not what I proposed.

But it is what's necessary, in general. Look at the characters for the
vertical braces with all that white space. You can't just pull out
or generate a rectangular character that is the same size without
looking at the "ink" within the character.




Donald Arseneau as...@triumf.ca

Ilya Zakharevich

Dec 7, 2011, 5:23:16 PM
On 2011-12-06, Donald Arseneau <as...@triumf.ca> wrote:
>> According to YOUR analysis, all that is missing to make a perfect
>> underbrace is a rectangular character of the height \ht of cmex10's
>> "78.
>
> No, it is only according to your understanding of it. ("78?? Do you
> mean "7A?). Seamless matching characters must have the same "ink"
> position relative to the reference point.

[Double-checking... Yes, it is "7A/"7C. Sorry!]

I changed your code to

\def\shoc#1{{#1:~\hbox{\vrule height 0.2pt width 3pt}%
\fboxsep=0pt\fboxrule=0.1pt%
\fbox{\textcolor[named]{Red}{\cmx\char"#1}}}}

and it looks like the baseline of "7A/"7C is at the lower edge of the
bounding box. So your restriction is more or less trivial: the
baseline of this rectangular char should be on the lower edge of the
bbox... Am I missing anything?

>> All that I propose is to make dvi2WHATEVER to recognize the mocking,
>> and make it back into a char.

> Please do so, since that is "all" there is to it.

So you think my analysis is correct, and such an approach would also
produce a perfect underbrace with dvi2vectorformats converters?

> But it is what's necessary, in general. Look at the characters for the
> vertical braces with all that white space. You can't just pull out
> or generate a rectangular character that is the same size without
> looking at the "ink" within the character.

But vertical braces are converted to PDF nicely, aren't they? (I
think I saw problems with AcroRead 3, but not with AcroRead 4.) As
you said, it is only underbraces which cause problems.

Ilya

Robin Fairbairns

Dec 11, 2011, 1:51:21 PM
Ilya Zakharevich <nospam...@ilyaz.org> writes:

> On 2011-12-06, Donald Arseneau <as...@triumf.ca> wrote:
> [...]
>
>>> All that I propose is to make dvi2WHATEVER to recognize the mocking,
>>> and make it back into a char.
>
>> Please do so, since that is "all" there is to it.
>
> So you think my analysis is correct, and such an approach would also
> produce a perfect underbrace with dvi2vectorformats convertors?

i think you are missing the emphasis: donald is suggesting that _you_
patch the appropriate dvi processors.

i once did some work on a dvi processor (back in the 80s). a laugh a
minute, i can assure you: i shall be interested to learn how you get on.
--
Robin Fairbairns, Cambridge
my address is @cl.cam.ac.uk, regardless of the header. sorry about that.

Ilya Zakharevich

Dec 11, 2011, 10:33:05 PM
On 2011-12-11, Robin Fairbairns <rf...@cl.cam.ac.uk> wrote:
>> So you think my analysis is correct, and such an approach would also
>> produce a perfect underbrace with dvi2vectorformats convertors?

> i think you are missing the emphasis: donald is suggesting that _you_
> patch the appropriate dvi processors.

Why do you think so?

[I do not see why fixing the dvi processor is productive when I do
not know how vector graphic engines are designed to render adjacent
characters gaplessly. I can see that "a gapless rendering of
glyphs taken from the same font with the same scaling and/or
rendering matrix" [*] may be an important design goal. Given that
this is properly implemented in the renderer, it IS important to
have the typesetting programs placing parts of "compound
characters" correctly.

[*] I mean: two glyphs are rendered exactly the same as a glyph
whose ink is the union of these glyph's inks. It is easy to
imagine how the antialiasing logic and/or "snapping to
grid" may make this assumption false.

However, for my suggestion to work, the same programming logic
should be applicable to gaplessly placed glyphs from different
fonts with different scaling matrices. (And we KNOW that it does
not work for gaplessly placed glyph and a rectangular blob of ink.)
I would appreciate it if people shared their expectations about
different renderers...

> i once did some work on a dvi processor (back in the 80s). a laugh a
> minute, i can assure you: i shall be interested to learn how you get on.

The two most important processors are AFAIU dvips and pdfTeX. I think
I glanced through the code of dvips about 15 years ago, and do not
remember any gremlins. Do you expect some? Should I brace for pdfTeX
being coded by script kiddies, or what?

Thanks,
Ilya

Ilya Zakharevich

Dec 13, 2011, 6:44:28 PM
On 2011-12-01, Ilya Zakharevich <nospam...@ilyaz.org> wrote:
> So this is essentially a problem with dvi2WHATEVER converters
> (including pdfTeX by extension...), right? [E.g., I do not see any
> discrepancy on pmdvi viewer...]
>
> I wonder whether one could circumvent this by having a special like
> \special{rulechar: on/off} so that
>
> \special{rulechar: on} \underbrace{...} \special{rulechar: off}
>
> would advise the converter to transform rules into a rescaled char (this
> char, of course, must be known to have a rectangular glyph). Then the
> underbrace would be composed entirely of characters (as far as the
> particular document format is concerned), so it wouldn't have these problems?

Here is an attempt to implement a poor man's version of part of it at the
level of LaTeX. Add this to the preamble inside \makeatletter ...
\makeatother, and \usepackage{amsmath}. (Note: \resizebox requires
graphicx, and the AMSa symbol font used below is declared by amssymb.)

\DeclareMathSymbol\myBsquare {\mathord}{AMSa}{"04}
\def\myBsquareSTRIP{$\m@th\mkern-1mu\myBsquare\mkern-1mu$}

\def\upbracefill{$\m@th \setbox\z@\hbox{$\braceld$}%
\setbox\tw@\hbox{\resizebox{2pt}{\ht\z@}{\myBsquareSTRIP}}%
\setbox\z@\hbox{\resizebox{1pt}{\ht\z@}{\myBsquareSTRIP}}%
\bracelu\rlap{\copy\z@}\cleaders\copy\tw@\hfill\llap{\copy\z@}\bracerd
\braceld\leaders\vrule \@height\ht\z@ \@depth\z@\hfill\braceru$}

This would draw the left half of the underbrace using characters, and the
right half using rules. Observations:

a) AcroRead v4 does not have problems with LaTeX's version
(without type 1 fonts) -- if one forgets about raster font
problems with v4. With this version, 1-pixel gaps appear at some
magnifications. Surprisingly, they do not disappear if I change
\mkern to -1.01mu, only to -1.1mu. So it is not a problem
with rounding?!

b) Can't check other viewers right now.

c) I used a very poor substitute for a "rectangular character
filling its bbox": \blacksquare has rounded corners, and 1mu
whitespace about it. We correct for whitespace, but not for
rounded corners.

What are your experiences with this in your viewers?

Ilya

Michael Shell

Dec 13, 2011, 10:02:12 PM
On Tue, 13 Dec 2011 23:44:28 +0000 (UTC)
Ilya Zakharevich <nospam...@ilyaz.org> wrote:


> a) AcroRead v4 does not have problems with the LaTeX's version
> (without type 1 fonts) - if one forgets about raster font
> problems with v4. With this version, 1pixel gaps appear in some
> magnifications. Surprisingly, they do not disappear if I change
> \mkern to be -1.01mu, only to -1.1mu. So it is not a problem
> with rounding?!


Does the PDF 72dpi coordinate system come into play in this problem?

It is my understanding that the "defining points" of a PDF vector object
must be on a 72dpi PDF grid (e.g., the center and radius of a circle)
even though other parts of that object will be accurately rendered at
whatever the current screen or print resolution is being used and thus
need not fall exactly on PDF grid points (but rather the grid of the
higher resolution rasterizer when rendering to screen or print).

A potential trouble spot occurs when two different objects must have
their "defining points" "snapped" to that PDF grid, but still must
overlap as needed, because rounding the two objects' defining points
to the PDF grid may require small changes in the object definition
to maintain the desired overlap in spite of "grid snapping" which can
bump an object up to 0.5 * 1/72 inches horizontally as well as vertically.


Cheers,

Mike Shell

Robin Fairbairns

Dec 14, 2011, 2:57:09 AM
Michael Shell <ne...@michaelshell.org> writes:

> On Tue, 13 Dec 2011 23:44:28 +0000 (UTC)
> Ilya Zakharevich <nospam...@ilyaz.org> wrote:
>
>
>> a) AcroRead v4 does not have problems with the LaTeX's version
>> (without type 1 fonts) - if one forgets about raster font
>> problems with v4. With this version, 1pixel gaps appear in some
>> magnifications. Surprisingly, they do not disappear if I change
>> \mkern to be -1.01mu, only to -1.1mu. So it is not a problem
>> with rounding?!

never mind acroread 4 ... what's performance like on a remotely current
acrobat reader (adobe's or otherwise)?

> Does the PDF 72dpi coordinate system come to play in this problem?
>
> It is my understanding that the "defining points" of a PDF vector object
> must be on a 72dpi PDF grid (e.g., the center and radius of a circle)
> even though other parts of that object will be accurately rendered at
> whatever the current screen or print resolution is being used and thus
> need not fall exactly on PDF grid points (but rather the grid of the
> higher resolution rasterizer when rendering to screen or print).

really? there is a problem with dvips snapping to grid (which is why
-Ppdf used to set resolution 8000dpi), but there would have been little
point in bothering if there was a pdf limitation also getting in the
way.

> A potential trouble spot occurs when two different objects must have
> their "defining points" "snapped" to that PDF grid, but still must
> overlap as needed, because rounding the two objects' defining points
> to the PDF grid may require small changes in the object definition
> to maintain the desired overlap in spite of "grid snapping" which can
> bump an object up to 0.5 * 1/72 inches horizontally as well as vertically.

as i say, i don't think this is the case. if it were, we would surely
see problems like the underbrace one dotted through every line of text:
there are regular adjustments inside dvips-generated strings
of text: i don't know about pdftex's output since i don't read pdf...

Michael Shell

Dec 15, 2011, 5:59:18 AM
On Wed, 14 Dec 2011 07:57:09 +0000
Robin Fairbairns <rf...@cl.cam.ac.uk> wrote:


> really? there is a problem with dvips snapping to grid (which is why
> -Ppdf used to set resolution 8000dpi), but there would have been little
> point in bothering if there was a pdf limitation also getting in the
> way.


I wanted to better understand the problem because I really don't know
anything about how TeX "joins" such objects. Before we consider what
happens with dvips, I wanted to know if there is any related issue here
with respect to direct PDF output under pdftex. If we are joining objects
whose intersect points (i.e. "critical area") we can perfectly specify
under the coordinate system in question (e.g., 72dpi for PDF), then I
would not expect there to be a problem (and if there is any "onscreen"
issue in that case, I would think that the problem lies with the
screen rendering algorithm).

On the other hand, if we cannot precisely specify these intersect points
in the same point coordinate system used to define the objects (such as,
say, "the end of a line is at the point that is 30 degrees up a circle")
and have to make some assumptions about the final rendering resolution
to subtly alter the objects to ensure overlap at the intersect area,
then I would expect there to be a constant threat of problems like this
depending on the coordinate system being used (and for on-screen display,
the screen resolution and magnification).


Anyway, from a more practical standpoint, I ran some *printing* tests of
my own on what I think may be the same type of problem in a different form:


\documentclass{minimal}

\begin{document}

\begin{displaymath}
\sqrt{\phantom{\rule{72pt}{48pt}}}
\end{displaymath}

\end{document}


Which involves a simple square root symbol rather than an underbrace.
The reason I got interested in this example is because the original
poster of this thread (who did send me some test files to look at, but
I think he is busy with other things now) noticed this square root
issue in addition to his underbrace woes.

The anomaly (which, admittedly, even if present will be somewhat hard to
see) is with respect to the connection of the long horizontal line of
the square root to its left part.
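For reference, the underbrace discussed in this thread is assembled the same way: quarter-brace end glyphs from the math extension font butted against `\leaders` rules, which is exactly where a one-pixel misalignment can creep in. A minimal sketch of that construction (modelled on the LaTeX kernel's `\downbracefill`, lightly simplified; `\mybracefill` is my own name, not a kernel macro):

```latex
\documentclass{minimal}
\begin{document}
\makeatletter
% \braceld etc. are the quarter-brace glyphs (cmex10 slots "7A--"7D);
% the horizontal run between them is a \leaders\vrule, so any rounding
% difference between rule and glyph rasterization shows at the joins.
\def\mybracefill{$\m@th
  \setbox\z@\hbox{$\braceld$}% measure the rule thickness from a glyph
  \braceld\leaders\vrule\@height\ht\z@\@depth\z@\hfill\braceru
  \bracelu\leaders\vrule\@height\ht\z@\@depth\z@\hfill\bracerd$}
\makeatother
\hbox to 6cm{\mybracefill}
\end{document}
```

Magnifying the output of this example shows the same glyph/rule joins that `\underbrace` produces.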

My printing tests were to my Postscript 600dpi HP Laserjet 4si MX.

Pdflatex did very well.

"Standard/default" dvips settings (using -D 600) did have a faint anomaly
when printing from acroread 7.0 and xpdf.

Enabling -D 7200 did not have much effect (and in fact it actually
makes it a tad worse under the older acroreads).

However, enabling Timothy Van Zandt's alt-rule.pro dvips drawing
algorithm (see the docs of my testflow package for details) did seem
to fix the problem (in print that is), even for -D 7200.

My conclusion so far is that the use of Timothy Van Zandt's
alt-rule.pro is the single most important factor in dvips' alignment
quality when printing.
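For readers who want to try this, a sketch of the invocation (file names are placeholders; dvips's -h option prepends an extra PostScript header, which is how alt-rule.pro gets enabled -- see the testflow docs for where to obtain the file):

```shell
# Include Van Zandt's alternative rule-drawing code in the output;
# -D sets the resolution, -h prepends the extra header file.
dvips -D 600 -h alt-rule.pro -o test.ps test.dvi
```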

I suspect that the original poster's system has another contributing
problem: the PCL conversion performed by printer drivers that output to
a PCL printer. If a PS file is converted to PCL at a resolution lower
than (or different from) that of the target printer:

gs -q -dSAFER -dNOPAUSE -dBATCH -sDEVICE=ljet4d -r300 -sOutputFile=- file.ps > file_r300.pcl

the problem shows up, but is fine when using say, -r600 for a 600 dpi
printer.
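For comparison, a sketch of the same conversion with the PCL resolution matched to a 600 dpi printer (based on the gs command above; file names are placeholders):

```shell
# Matching -r to the printer's true resolution avoids the extra
# resampling pass that reintroduces the hairline gaps in print.
gs -q -dSAFER -dNOPAUSE -dBATCH -sDEVICE=ljet4d -r600 \
   -sOutputFile=file_r600.pcl file.ps
```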

So, my overall practical recommendation at this point is to use
Van Zandt's alt-rule.pro under dvips and verify that any PCL print
driver conversions are correct for the printer at hand.


For *onscreen* display, on my system this example shows an artifact even
for pdflatex produced files (Acrobat Reader 7.0 and xpdf 3.02 on linux).
However, because the artifact is "clearly not there" at higher
magnifications, I consider it to be a problem with the onscreen rendering
algorithms rather than with pdflatex's output.




Cheers,

Mike Shell





Donald Arseneau

Dec 16, 2011, 4:17:12 PM
Ilya Zakharevich <nospam...@ilyaz.org> writes:

> and it looks like the baseline of "7A/"7C is at the lower edge of the
> bounding box. So your restriction is more or less trivial: the
> baseline of this rectangular char should be on the lower edge of the
> bbox... Do I miss anything?
>
> >> All that I propose is to make dvi2WHATEVER to recognize the mocking,
> >> and make it back into a char.
>
> > Please do so, since that is "all" there is to it.
>
> So you think my analysis is correct, and such an approach would also
> produce a perfect underbrace with dvi2vectorformats convertors?

I can't really judge how rendering engines (xdvi, ps printer, ghostscript,
various pdf viewers) handle the pixel alignment for scaled characters,
whether it is an improvement over how they handle it for rules. Maybe
it will help. I ran your trial using \scalebox and did still see
irregularities. That's a disappointment.

What I can say is that the \special idea still seems misdirected. If a
backend can't understand rotation specials, what are the chances it will
understand the (still hypothetical) convert-rule-to-character special?
If it works, then it is only applicable to underbrace and overbrace,
where the ink fills the bounding box (and ink for parts of the brace
hangs outside the bounding box). It seems more profitable to add one
new character to the font, and eliminate any need for kludges. It would
seem more practical to distribute a font with a document rather than
programs to view the document.


Donald Arseneau as...@triumf.ca

Ilya Zakharevich

Dec 25, 2011, 11:41:07 PM
On 2011-12-14, Robin Fairbairns <rf...@cl.cam.ac.uk> wrote:
>>> a) AcroRead v4 does not have problems with the LaTeX's version
>>> (without type 1 fonts) - if one forgets about raster font
>>> problems with v4. With this version, 1pixel gaps appear in some
>>> magnifications. Surprisingly, they do not disappear if I change
>>> \mkern to be -1.01mu, only to -1.1mu. So it is not a problem
>>> with rounding?!
>
> never mind acroread 4 ... what's performance like on a remotely current
> acrobat reader (adobe's or otherwise)?

If you have "a remotely current acrobat reader", you have the LaTeX
code to check. I do not have anything newer than v4 here.

>> Does the PDF 72dpi coordinate system come into play in this problem?

I would believe that this is bogus. Another point in favor of
bogosity: GS's ps2pdf driver uses a 720 ppi coordinate system in the
"hardware" device (recall that the PDF-output engine is "hardware" from
the point of view of GS).

Ilya

Ilya Zakharevich

Dec 28, 2011, 5:54:37 PM
On 2011-12-16, Donald Arseneau <as...@triumf.ca> wrote:
> I can't really judge how rendering engines (xdvi, ps printer, ghostscript,
> various pdf viewers) handle the pixel alignment for scaled characters,
> whether it is an improvement over how they handle it for rules. Maybe
> it will help. I ran your trial using \scalebox and did still see
> irregularities. That's a disappointment.
>
> What I can say is that the \special idea still seems misdirected. If a
> backend can't understand rotation specials, what are the chances it will
> understand the (still hypothetical) convert-rule-to-character special?

I see - the core of my idea turns out to be too obscure. Think what
happens if the renderer DOES NOT understand the special - it just
renders the rule with a chance that it is misaligned by 1px. But the
picture IS there!

So "old" renderers present "lousy-ish" images, and "new" ones perfect
ones. Vs the rotation scheme: "old" renderers present something
incomprehensible [*], and new ones a perfect picture.

[*] I do not know why - I would expect that only the rotated underbrace
would be misrepresented. But it turns out that (at least in
dviPM) the underbraced part is ALSO moved far away.

> If it works, then it is only applicable to underbrace and overbrace,

... and whatever else makes mixes of glyphs and rules? I also know about radicals...

> where the ink fills the bounding box (and ink for parts of the brace
> hang outside the bounding box).

I thought that all that we are trying to massage are rules - which by
definition fill the bounding box.

> It seems more profitable to add one
> new character to the font, and eliminate any need for kludges. It would
> seem more practical to distribute a font with a document rather than
> programs to view the document.

... And we are back to my question: are "compound chars" rendered
nicely when they are taken from different fonts (but with the same
magnification)?

Ilya