
How to create a PDF-Printer from the command line


Jason

Jan 5, 2018, 10:20:05 PM
On a Raspberry Pi with Raspbian, I would like to create a PDF printer to
print files to. I only know how to do this with the GUI program
system-config-printer, but I don't want to install that on this
Pi. What shell command do I need to create a PDF printer on the Pi (or
on any Debian system, for that matter)?

Thanks!
--
Jason

Gene Heskett

Jan 5, 2018, 11:50:04 PM
This is all handled by CUPS, and on Jessie on a Pi, or Stretch on a
Rock64, it just works. But since you need CUPS anyway, why not enable
browsing in CUPS on the Pi, and "share" any printers you may have hooked
up to one of the other machines on your home network? Then just send the
print job to the printer(s) that will show up when you click on print in
any of the editors, including the graphics editors like GIMP or Inkscape.

I can be working on a piece of G-code to carve metal, using Geany, need a
paper copy for reference, it's only 20 pages, and it's waiting for me on
the output tray of a b&w laser here in the den by the time I arrive in
here from the garage. I needed the exercise anyway. :)

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>

john doe

Jan 6, 2018, 12:00:04 AM
Why do you want to "print" when you can convert to PDF on the command line?
Based on the original file's extension, you can simply search for a utility
that will convert the original file to PDF.

--
John Doe

Brian

Jan 6, 2018, 10:30:05 AM
lpadmin. The wiki should help.

--
Brian.

Brian

Jan 6, 2018, 3:20:04 PM
How does one convert a text file to a PDF using the command line?

--
Brian

john doe

Jan 6, 2018, 4:00:04 PM

Curt

Jan 6, 2018, 4:10:04 PM
unoconv -f pdf text.txt
cupsfilter text.txt > text.pdf
enscript text.txt -o - | ps2pdf - text.pdf

The above works here.

--
"An autobiography is only to be trusted when it reveals something disgraceful.
A man who gives a good account of himself is probably lying, since any life
when viewed from the inside is simply a series of defeats."
— George Orwell

Brian

Jan 6, 2018, 7:20:05 PM
On Sat 06 Jan 2018 at 21:02:15 +0000, Curt wrote:

> On 2018-01-06, Brian <ad...@cityscape.co.uk> wrote:
> > On Sat 06 Jan 2018 at 05:54:00 +0100, john doe wrote:
> >
> >> On 1/6/2018 4:06 AM, Jason wrote:
> >> > On a RasperryPi with Raspbian, I would like to create a PDF Printer to
> >> > print files to. I only know how to do this with the GUI program
> >> > system-config-printer but I don't want to install that on this
> >> > Pi. What shell command do I need to create a PDF printer on the Pi (or
> >> > on any Debian, for that matter)?
> >> >
> >>
> >> Why do you want to"print" if you can convert to pdf using the command line?
> >> Based on the original file extension you simply search for a utility that
> >> will convert your original file to pdf.
> >
> > How does one convert a text file to a PDF using the command line?
> >
>
> unoconv -f pdf text.txt

50+ megabytes of the libreoffice stack to install, but yes, that will
do it. A sledgehammer to crack a nut.

> cupsfilter text.txt > text.pdf

I think you mean /usr/sbin/cupsfilter, unless you are assuming a user
has to be root. I like this one; unlike unoconv, it will probably be
on the system already and the command is flexible. Unfortunately, the
output doesn't have searchable or extractable text, desirable features
in a PDF. You pays your money and ...

> enscript text.txt -o - | ps2pdf - text.pdf

If UTF-8 doesn't matter to you, why indeed not use enscript?

> The above works here.

They all work here, too. As does printer-driver-cups-pdf, the OP's
target software.

--
Brian.

Erik Christiansen

Jan 6, 2018, 9:30:04 PM
On 07.01.18 00:19, Brian wrote:
> On Sat 06 Jan 2018 at 21:02:15 +0000, Curt wrote:
> > On 2018-01-06, Brian <ad...@cityscape.co.uk> wrote:
> > unoconv -f pdf text.txt
>
> 50+ megabytes of the libreoffice stack to install, But yes, that will
> do it. A sledgehammer to crack a nut.

Perhaps this is more delicate: https://www.gnu.org/software/a2ps/

Erik

Erik Christiansen

Jan 6, 2018, 9:50:03 PM
On 07.01.18 13:26, Erik Christiansen wrote:
>
> This may be more delicate?: https://www.gnu.org/software/a2ps/

Hmmm ... and if ps2pdf isn't yet installed at your end, an apt-get
fixes that. It produces sterling PDF from PS for me - big prints come
out perfectly at the local printer.

Erik

Jason

Jan 6, 2018, 9:50:03 PM
I am grateful for all the suggestions. The reason I would like to
set up a PDF printer is so that I can use the lp command to print most
of the common file formats that might come as email attachments to the
PDF printer and have them all end up in the ~/PDF folder. For example:

lp -d PDF-Printer image.jpg

This way, if there is no physical printer accessible from the Pi, one
can just print to the PDF printer and produce a hard copy later from the
generated PDFs.
I had looked into lpadmin and thought that might be what I need but
couldn't find in the man page how to add a printer. I don't have web
access so am asking here rather than looking on the wiki.

So basically what I'm asking is how to add a printer (this could apply
to any printer, not just PDF) without needing to install a printer
configuration GUI.

Thanks.
--
Jason

Brian

Jan 7, 2018, 5:20:05 AM
On Sat 06 Jan 2018 at 20:45:01 -0600, Jason wrote:

> > lpadmin. The wiki should help.
>
> I had looked into lpadmin and thought that might be what I need but
> couldn't find in the man page how to add a printer. I don't have web
> access so am asking here rather than looking on the wiki.

For a PDF printer: install (or reinstall) printer-driver-cups-pdf.
lpadmin automatically sets up a print queue.

> So basically what I'm asking is how to add a printer (this could apply
> to any printer, not just PDF) without needing to install a printer
> configuration GUI.

In general: lpadmin -p queue_name -v device_uri -E -m PPD.

Obtain device_uri from 'lpinfo -v' and PPD from 'lpinfo -m'.
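A sketch of both routes, assuming a Debian system with CUPS running (the
queue name and the quoted placeholders are illustrative; real values come
from the lpinfo queries):

```shell
# Route 1: a PDF printer. Installing the package registers a "PDF"
# queue automatically; verify it with lpstat.
sudo apt-get install printer-driver-cups-pdf
lpstat -p

# Route 2: any printer, by hand. Discover the device URI and PPD first:
lpinfo -v    # lists available device URIs
lpinfo -m    # lists available PPDs/driver models
# Then create and enable the queue (placeholders, not real values):
sudo lpadmin -p my_queue -v "device_uri_from_lpinfo" -E -m "ppd_from_lpinfo"
```

These commands configure a live CUPS instance, so treat them as a sketch
to adapt rather than something to run verbatim.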

--
Brian.

Curt

Jan 7, 2018, 6:10:05 AM
On 2018-01-07, Brian <ad...@cityscape.co.uk> wrote:
>> >
>> > How does one convert a text file to a PDF using the command line?
>> >
>>
>> unoconv -f pdf text.txt
>
> 50+ megabytes of the libreoffice stack to install, But yes, that will
> do it. A sledgehammer to crack a nut.

Depends on the nut, doesn't it?

Anyhoo, I don't understand where you get the 50+ megabytes. I see two
dependencies in stable (python3 and python3-uno), a package size of 48.8
kB, and an installed size of 327.0 kB. So I'm understanding the package
does not depend upon the installation of LibreOffice proper (the
redoubtable "stack"?).

Perhaps my comprehension is faulty.

Brian

Jan 7, 2018, 7:40:06 AM
On Sun 07 Jan 2018 at 11:06:01 +0000, Curt wrote:

> On 2018-01-07, Brian <ad...@cityscape.co.uk> wrote:
> >> >
> >> > How does one convert a text file to a PDF using the command line?
> >> >
> >>
> >> unoconv -f pdf text.txt
> >
> > 50+ megabytes of the libreoffice stack to install, But yes, that will
> > do it. A sledgehammer to crack a nut.
>
> Depends on the nut, doesn't it?
>
> Anyhoo, I don't understand where you get the 50+ megabytes. I see two
> dependencies in stable (python3 and python3-uno), a package size of 48.8

Look at the dependencies of python3-uno and then at those of
libreoffice-core.

> kB, and an installed size of 327.0 kB. So I'm understanding the package
> does not depend upon the installation of LibreOffice proper (the
> redoubtable "stack"?).

Indeed not. Those packages are recommended only. (But many people wisely
stick with the default of installing Recommends:).

> Perhaps my comprehension is faulty.

"Room for improvement" is how I would put it. :)


root@desktop3:~# apt-get install unoconv
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  fonts-opensymbol libboost-date-time1.62.0 libboost-filesystem1.62.0 libboost-iostreams1.62.0 libboost-system1.62.0
  libclucene-contribs1v5 libclucene-core1v5 libcmis-0.5-5v5 libeot0 libexttextcat-2.0-0 libexttextcat-data libgltf-0.1-1
  libgpgmepp6 libhyphen0 liblangtag-common liblangtag1 libmhash2 libmythes-1.2-0 libneon27-gnutls libodfgen-0.1-1
  liborcus-0.12-0 libpython3.6 libraptor2-0 librasqal3 librdf0 libreoffice-common libreoffice-core libreoffice-style-galaxy
  libreoffice-style-tango librevenge-0.0-0 libxmlsec1 libxmlsec1-nss libyajl2 python3-uno uno-libs3 ure
Suggested packages:
  raptor2-utils rasqal-utils librdf-storage-postgresql librdf-storage-mysql librdf-storage-sqlite librdf-storage-virtuoso
  redland-utils tango-icon-theme java5-runtime
Recommended packages:
  libreoffice-writer libreoffice-draw libreoffice-calc libreoffice-impress
The following NEW packages will be installed:
  fonts-opensymbol libboost-date-time1.62.0 libboost-filesystem1.62.0 libboost-iostreams1.62.0 libboost-system1.62.0
  libclucene-contribs1v5 libclucene-core1v5 libcmis-0.5-5v5 libeot0 libexttextcat-2.0-0 libexttextcat-data libgltf-0.1-1
  libgpgmepp6 libhyphen0 liblangtag-common liblangtag1 libmhash2 libmythes-1.2-0 libneon27-gnutls libodfgen-0.1-1
  liborcus-0.12-0 libpython3.6 libraptor2-0 librasqal3 librdf0 libreoffice-common libreoffice-core libreoffice-style-galaxy
  libreoffice-style-tango librevenge-0.0-0 libxmlsec1 libxmlsec1-nss libyajl2 python3-uno uno-libs3 unoconv ure
0 upgraded, 37 newly installed, 0 to remove and 0 not upgraded.
Need to get 67.0 MB of archives.
After this operation, 240 MB of additional disk space will be used.
Do you want to continue? [Y/n]


Note that recommended packages are not being installed. Adding them
with "--install-recommends" raises the bar to 207 MB. My estimate was
conservative.

OTOH, cupsfilter is already on the system. It handles text, image and
PostScript files and, run with "-m application/vnd.cups-pdf", produces
a PDF which is autorotated to make it suitable for normal printing.
Lack of searchable text in a PDF is of no consequence for printing.
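Spelled out, the cupsfilter route described above is a one-liner (assumes
CUPS is installed; the file names are illustrative):

```shell
# Produce an autorotated, print-ready PDF via the filters CUPS ships;
# cupsfilter lives in /usr/sbin, so give the full path as a normal user.
/usr/sbin/cupsfilter -m application/vnd.cups-pdf text.txt > text.pdf
```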

--
Brian.

rhkr...@gmail.com

Jan 7, 2018, 7:50:05 AM
I know this is not directly on point to the OP's question as subsequently
clarified, but I would point out that a variety of programs like txt2pdf
exist (and work). I assume, but don't know, that they are available in the
various Debian distros.

David Wright

Jan 7, 2018, 11:10:04 AM
On Sun 07 Jan 2018 at 11:06:01 (+0000), Curt wrote:
> On 2018-01-07, Brian <ad...@cityscape.co.uk> wrote:
> >> >
> >> > How does one convert a text file to a PDF using the command line?
> >> >
> >>
> >> unoconv -f pdf text.txt
> >
> > 50+ megabytes of the libreoffice stack to install, But yes, that will
> > do it. A sledgehammer to crack a nut.
>
> Depends on the nut, doesn't it?
>
> Anyhoo, I don't understand where you get the 50+ megabytes. I see two
> dependencies in stable (python3 and python3-uno), a package size of 48.8
> kB, and an installed size of 327.0 kB. So I'm understanding the package
> does not depend upon the installation of LibreOffice proper (the
> redoubtable "stack"?).
>
> Perhaps my comprehension is faulty.

I guess you forgot to read man unoconv:

"unoconv uses the LibreOffice’s UNO bindings for non-interactive
conversion of documents and therefore needs an LibreOffice
instance to communicate with. Therefore if it cannot find one,
it will start its own instance for temporary usage."

Myself, I use paps and ps2pdf. paps has a few options that I use,
like margins and columns, and I get a few more obscure Unicode
characters rendered successfully using the Freemono fonts than
I get with cupsfilter, but that's probably because I haven't
studied how I could modify the latter's behaviour.

Cheers,
David.

Brian

Jan 7, 2018, 1:20:05 PM
On Sun 07 Jan 2018 at 07:46:30 -0500, rhkr...@gmail.com wrote:

> I know this is not directly on point to the OP's question as subsequently

But it does treat conversion of files to PDFs, so you are not way off
base. Look at the variety of techniques people use: paps, a2ps,
enscript, cupsfilter, ps2pdf, unoconv etc. There probably isn't one
tried and trusted method which suits everyone; and we haven't exhausted
discussion of them all and how they could fit into a printing system.

> clarified, but I would point out that a variety of programs like txt2pdf exist
> (and work)--I assume, but don't know that they are available in the various
> Debian distros.

Which txt2pdf? I tried the DFSG free one at

https://github.com/baruchel/txt2pdf

Not in Debian, AFAICT, but download, put in /usr/local/bin and install
python-reportlab. Gives searchable PDFs, fonts can be selected more
easily than with cupsfilter or cups-pdf and it has UTF-8 support. Looks
useful.

--
Brian.

Brian

Jan 7, 2018, 3:10:05 PM
Thanks, but I should have been clearer and more precise. I was after a
"one-step" utility which goes directly from text to PDF. (cups-pdf gives
the appearance of doing that, but it doesn't.) You question its utility;
if pressed, I could agree with you.

I don't want to go through the intermediate PostScript production step.
But, perhaps, it is unavoidable.

--
Brian.

rhkr...@gmail.com

Jan 7, 2018, 3:30:04 PM
On Sunday, January 07, 2018 01:15:06 PM Brian wrote:
> On Sun 07 Jan 2018 at 07:46:30 -0500, rhkr...@gmail.com wrote:

> > clarified, but I would point out that a variety of programs like txt2pdf
> > exist (and work)--I assume, but don't know that they are available in
> > the various Debian distros.
>
> Which txt2pdf?

Sorry, I don't remember--it must have been on a machine prior to my current
one (with Wheezy), as I don't see it--the prior machine was Lenny, IIRC
(Debian 5.n).

But my statement about the variety of programs was more intended to say that
there are programs like txt2pdf, pdf2txt, html2txt (IIRC)--in other words,
programs to convert between a variety of formats.

john doe

Jan 7, 2018, 3:50:05 PM
Why not (you can run both utilities in one go)?

--
John Doe

Brian

Jan 7, 2018, 4:00:06 PM
You have completely missed the point. "one-step" and "directly" were
the clues.

--

Brian

David Wright

Jan 7, 2018, 10:20:04 PM
Indeed. It seems a lot faster than paps+ps2pdf too. I can see myself
using this, though I'll keep my paps function as well, as it appears
to be able to make substitutions for missing glyphs. It's handy to
have a function that prints *something* at every position (except
the strip at 0x80), with those little blobs containing 4 hex chars
where there's no glyph. paps also does columns.

The default font in txt2pdf is Courier→Nimbus Mono AFAICT, which is
very limited. The unifont TTF font has far more characters, but
the quality is very poor (deliberately, but looks like a bitmapped font).
I also haven't figured out line-numbering: I'll have to study the script.
Searchability is a useful extra (I'm used to just searching the original
text source file).

BTW a2ps, suggested earlier, is another that failed to move to Unicode
AIUI. A shame as it had lots of useful column/custom heading stuff.

Cheers,
David.

Curt

Jan 8, 2018, 4:40:05 AM
On 2018-01-08, David Wright <deb...@lionunicorn.co.uk> wrote:
>>
>> Which txt2pdf? I tried the DFSG free one at
>>
>> https://github.com/baruchel/txt2pdf
>>
>> Not in Debian, AFAICT, but download, put in /usr/local/bin and install
>> python-reportlab. Gives searchable PDFs, fonts can be selected more
>> easily than with cupsfilter or cups-pdf and it has UTF-8 support. Looks
>> useful.
>
> Indeed. It seems a lot faster than paps+ps2pdf too. I can see myself
> using this, though I'll keep my paps function as well, as it appears
> to be able to make substitutions for missing glyphs. It's handy to
> have a function that prints *something* at every position (except
> the strip at 0x80), with those little blobs containing 4 hex chars
> where there's no glyph. paps also does columns.
>
> The default fault in txt2pdf is Courier→Nimbus Mono AFAICT, which is
> very limited. The unifont TTF font has far more characters, but
> the quality is very poor (deliberately, but looks like a bitmapped font).
> I also haven't figured out line-numbering: I'll have to study the script.
> Searchability is a useful extra (I'm used to just searching the original
> text source file).

It seems very swift. I tried line-numbering with the '--line-numbers'
argument, but got no line numbers (which is not what I was expecting).

Then I tried the '--page-numbers' argument, expecting to see page
numbers (and I did, centered at the bottom).

You can change the default font ('--font' or '-f' <full-path-to-ttf>),
but I'm sure you know that already.

> BTW a2ps, suggested earlier, is another that failed to move to Unicode
> AIUI. A shame as it had lots of useful column/custom heading stuff.
>
> Cheers,
> David.
>
>


john doe

Jan 8, 2018, 5:50:05 AM
While I agree that I misunderstood what you were saying, not everyone
is fully acquainted with the language of Shakespeare, even though the
list is in that language. :)
Why would I persuade you to do anything? There are no good answers
except the one that suits you.

--
John Doe

Brian

Jan 8, 2018, 6:30:06 AM
Fair enough.

> Why would I persuade you to do any thing; there are no good answers except
> the one that suit you.

Ouch! :)

--
Brian.

rhkr...@gmail.com

Jan 8, 2018, 7:30:06 AM
On Sunday, January 07, 2018 03:28:25 PM rhkr...@gmail.com wrote:
> But my statement about variety of programs was more intended to say that
> there are programs like txt2pdf, pdf2txt, html2txt (iirce)--in other
> words, such programs to convert between a variety of formats.

For the sake of (an attempt at) completeness, there are also some similar
programs that use "to" instead of "2", e.g., pdftotext. (I don't know if
there is a texttopdf or txttopdf.)

Brian

Jan 8, 2018, 1:20:05 PM
On Mon 08 Jan 2018 at 09:35:36 +0000, Curt wrote:

> On 2018-01-08, David Wright <deb...@lionunicorn.co.uk> wrote:
> >>
> >> Which txt2pdf? I tried the DFSG free one at
> >>
> >> https://github.com/baruchel/txt2pdf
> >>
> >> Not in Debian, AFAICT, but download, put in /usr/local/bin and install
> >> python-reportlab. Gives searchable PDFs, fonts can be selected more
> >> easily than with cupsfilter or cups-pdf and it has UTF-8 support. Looks
> >> useful.
> >
> > Indeed. It seems a lot faster than paps+ps2pdf too. I can see myself
> > using this, though I'll keep my paps function as well, as it appears
> > to be able to make substitutions for missing glyphs. It's handy to
> > have a function that prints *something* at every position (except
> > the strip at 0x80), with those little blobs containing 4 hex chars
> > where there's no glyph. paps also does columns.
> >
> > The default fault in txt2pdf is Courier→Nimbus Mono AFAICT, which is
> > very limited. The unifont TTF font has far more characters, but
> > the quality is very poor (deliberately, but looks like a bitmapped font).
> > I also haven't figured out line-numbering: I'll have to study the script.
> > Searchability is a useful extra (I'm used to just searching the original
> > text source file).
>
> It seems very swift. I tried line-numbering with the '--line-numbers'
> argument, but got no line numbers (which is not what I was expecting).

A possible bug. Not to worry; preprocess:

pr -n text.txt
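The pr preprocessing step above can be sketched end to end; pr is part of
GNU coreutils, and the sample file name and contents are illustrative:

```shell
# txt2pdf's --line-numbers option produced no numbers in the test above,
# so number the lines yourself with pr(1) before converting to PDF.
printf 'alpha\nbeta\n' > /tmp/text.txt
pr -t -n /tmp/text.txt   # -t suppresses the page header; -n prefixes line numbers
```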

> Then I tried the '--page-numbers' argument, expecting to see page
> numbers (and I did, centered at the bottom).

Ditto.

> You can change the default font ('--font' or '-f' <full-path-to-ttf>,
> but I'm sure you know that already).

Unlike David Wright, I've not noticed the font quality to be poor when
the magnification ability (left click with the mouse) of gv is used to
examine characters in the PDF.

--
Brian.

Brian

Jan 8, 2018, 2:00:06 PM
The ones with "2" are from the ghostscript package. "to" is either from
poppler or cups or cups-filters.

--
Brian.

Brian

Jan 8, 2018, 2:20:05 PM
On Sun 07 Jan 2018 at 21:18:09 -0600, David Wright wrote:

> BTW a2ps, suggested earlier, is another that failed to move to Unicode
> AIUI. A shame as it had lots of useful column/custom heading stuff.

a2ps, enscript and (possibly) paps date from the era (not so long ago)
when PostScript was COW (centre of the world). That is not to say they
have outlived their usefulness (yet), but things have moved on to PDF.
In particular, PDF is a pivotal aspect of the default printing system,
as well as of many printers sold today. It also seems to be the common
way of distributing documents.

--
Brian.

rhkr...@gmail.com

Jan 8, 2018, 2:20:05 PM
Thanks!

David Wright

Jan 8, 2018, 8:30:05 PM
On Mon 08 Jan 2018 at 18:18:33 (+0000), Brian wrote:
> On Mon 08 Jan 2018 at 09:35:36 +0000, Curt wrote:
>
> > On 2018-01-08, David Wright <deb...@lionunicorn.co.uk> wrote:
> > >>
> > >> Which txt2pdf? I tried the DFSG free one at
> > >>
> > >> https://github.com/baruchel/txt2pdf
> > >>
> > >> Not in Debian, AFAICT, but download, put in /usr/local/bin and install
> > >> python-reportlab. Gives searchable PDFs, fonts can be selected more
> > >> easily than with cupsfilter or cups-pdf and it has UTF-8 support. Looks
> > >> useful.
> > >
> > > Indeed. It seems a lot faster than paps+ps2pdf too. I can see myself
> > > using this, though I'll keep my paps function as well, as it appears
> > > to be able to make substitutions for missing glyphs. It's handy to
> > > have a function that prints *something* at every position (except
> > > the strip at 0x80), with those little blobs containing 4 hex chars
> > > where there's no glyph. paps also does columns.
> > >
> > > The default fault in txt2pdf is Courier→Nimbus Mono AFAICT, which is
> > > very limited.

default.png attached¹. The majority of Unicode characters appear as
unadorned blobs.

> > > The unifont TTF font has far more characters, but
> > > the quality is very poor (deliberately, but looks like a bitmapped font).

unifont-ttf.png attached. AIUI the entire BMP (Basic Multilingual
Plane) is contained in less than 4MB and illustrated in a 2MB .BMP
(DIB) file, so the quality is, as I said, deliberately poor. Try
$ display /usr/share/fonts/opentype/unifont/unifont.ttf

> > > I also haven't figured out line-numbering: I'll have to study the script.
> > > Searchability is a useful extra (I'm used to just searching the original
> > > text source file).
> >
> > It seems very swift. I tried line-numbering with the '--line-numbers'
> > argument, but got no line numbers (which is not what I was expecting).
>
> A possible bug. Not to worry; preprocess:
>
> pr -n text.txt
>
> > Then I tried the '--page-numbers' argument, expecting to see page
> > numbers (and I did, centered at the bottom).
>
> Ditto.
>
> > You can change the default font ('--font' or '-f' <full-path-to-ttf>,
> > but I'm sure you know that already).
>
> Unlike David Wright, I've not noticed the font quality to be poor when
> the magnification ability (left click with the mouse) of gv is used to
> examine characters in the PDF.

What I was using with paps (and its maximum Unicode coverage when
diagnostic printing) is FreeMono, which appears to substitute unifont
characters where it needs to: freemono.png attached. This shows the
font itself, some hex blobs, and some unifont substitutions.
So *most* of a typical file will be printed with the quality of
$ display /usr/share/fonts/opentype/freefont/FreeMono.otf

Commenting on your other post, yes, it *would* be nice if paps were
papdf, but I merely have | ps2pdf - - at the end of the bash
function that sets the default font and margins etc to suit my
printer. So being Unicode-aware is far more important to me than
PS output.

¹ attachments are scrot screenshots of xpdf set to 600%, which limits
their crispness.

Cheers,
David.

Jason

Jan 8, 2018, 10:20:04 PM
On Sun, Jan 07, 2018 at 10:10:50AM +0000, Brian wrote:
> On Sat 06 Jan 2018 at 20:45:01 -0600, Jason wrote:
>
> > > lpadmin. The wiki should help.
> >
> > I had looked into lpadmin and thought that might be what I need but
> > couldn't find in the man page how to add a printer. I don't have web
> > access so am asking here rather than looking on the wiki.
>
> For a PDF printer: install (or reinstall) printer-driver-cups-pdf.
> lpadmin automatically sets up a print queue.

That's what I needed, thanks.

>
> > So basically what I'm asking is how to add a printer (this could apply
> > to any printer, not just PDF) without needing to install a printer
> > configuration GUI.
>
> In general: lpadmin -p queue_name -v device_uri -E -m PPD.
>
> Obtain device_uri from 'lpinfo -v' and PPD from 'lpinfo -m'.
>
Also useful information, I saved this for future reference.

The possibilities for file conversion mentioned in this thread were
interesting to me, too. I may use some of the other ideas at some
point in the future.

Thank you Brian, and everyone else.

--
Jason

davidson

Jan 9, 2018, 1:30:05 AM
On Sat, 6 Jan 2018, Brian wrote:

> On Sat 06 Jan 2018 at 05:54:00 +0100, john doe wrote:
>
>> On 1/6/2018 4:06 AM, Jason wrote:
>>> On a RasperryPi with Raspbian, I would like to create a PDF Printer to
>>> print files to. I only know how to do this with the GUI program
>>> system-config-printer but I don't want to install that on this
>>> Pi. What shell command do I need to create a PDF printer on the Pi (or
>>> on any Debian, for that matter)?
>>>
>>
>> Why do you want to"print" if you can convert to pdf using the command line?
>> Based on the original file extension you simply search for a utility that
>> will convert your original file to pdf.
>
> How does one convert a text file to a PDF using the command line?

Step 1. Make sure the text file is also a latex document.
Step 2. Use latex.

Ionel Mugurel Ciobîcă

Jan 9, 2018, 4:30:05 AM
On 9-01-2018, at 06h 52'16", davidson wrote about "Re: How to create a PDF-Printer from the command line"
> >
> >How does one convert a text file to a PDF using the command line?
>
> Step 1. Make sure the text file is also a latex document.
> Step 2. Use latex.

Assuming step 1 is reached, step 2 will make a dvi file. That was not
what the OP asked for...


Ionel

to...@tuxteam.de

Jan 9, 2018, 4:40:06 AM
Use pdflatex. Or lualatex. Or whatever. In any case, whoever is
going that path (it *does* have its upsides: I'm going it all the
time) knows that already.

Cheers
- -- tomás

Curt

Jan 9, 2018, 5:30:05 AM
pdftex (your one-step road to a PDF)?

DESCRIPTION
       Run the pdfTeX typesetter on file, usually creating file.pdf.

Of course, B. will inform us that this amounts to eradicating a microbe with a
gorilla (or vice-versa?).

>
> Ionel
>
>


--
"Ruling a large nation is like cooking a small fish" - Lao Tzu

Curt

Jan 9, 2018, 8:10:06 AM
If you're dealing with latex files, as I have taken some minutes to
discover (cough), you need 'pdflatex', not pdftex, which will barf
immediately upon encountering latex commands.

David Wright

Jan 9, 2018, 8:40:06 AM
So could you now elaborate on step 1 of this "one-step" process?

Cheers,
David.

Ionel Mugurel Ciobîcă

Jan 9, 2018, 9:00:06 AM
On 9-01-2018, at 13h 06'44", Curt wrote about "Re: How to create a PDF-Printer from the command line"
> On 2018-01-09, Curt <cu...@free.fr> wrote:
> > On 2018-01-09, Ionel Mugurel Ciobîcă <I.M.C...@upcmail.nl> wrote:
> >> On 9-01-2018, at 06h 52'16", davidson wrote about "Re: How to create a PDF-Printer from the command line"
> >>> >
> >>> >How does one convert a text file to a PDF using the command line?
> >>>
> >>> Step 1. Make sure the text file is also a latex document.
> >>> Step 2. Use latex.
> >>
> >> Assuming step 1 is reached, step 2 will make a dvi file. That was not
> >> what the OP asked for...
> >
> > pdftex (your one step road to a pdf)?
> >
> If you're dealing with latex files, as I have taken some minutes to
> discover (cough), you need 'pdflatex', not pdftex, which will barf
> immediately upon encountering latex commands.
>

I have used LaTeX since 1997, so I know about this.

Dealing with step 1: if a text document is properly formatted, one can
use txt2html to generate an HTML file, which can then be converted
further directly into LaTeX with gnuhtml2latex, or into PS (with
html2ps) or PDF (with wkhtmltopdf), etc. I have not used any of the
above, so I can't say how well they work.

Converting text to PDF is something I have done many times. In the past I
used enscript and a2ps. Then I experimented with u2ps and cedilla. Lately
I use paps (with --font="Freemono 12") for good UTF-8 coverage.
Of course I go via PostScript, but that is not so bad.

In a single go, a PDF printer may still be the only option.
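Ionel's paps route, written out as a pipeline (assumes the paps and
ghostscript packages are installed; the file names are illustrative):

```shell
# Render UTF-8 text to PostScript with paps, then pipe straight into
# ps2pdf so no intermediate .ps file is left behind.
paps --font="Freemono 12" text.txt | ps2pdf - text.pdf
```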


Ionel

Ionel Mugurel Ciobîcă

Jan 9, 2018, 10:10:07 AM
On 9-01-2018, at 07h 35'28", David Wright wrote about "Re: How to create a PDF-Printer from the command line"
For the non-LaTeX people, the easiest approach is to create a tex file (or
ltx, if you wish) that contains these lines before the text:

\documentclass[a4paper,12pt]{article}
\pagenumbering{arabic}
\begin{document}

and this line after the text:

\end{document}

LaTeX collapses spaces and single newlines. So if you want a line break
somewhere, you need to add it with \newline. If the text is UTF-8, also
add \usepackage[utf8x]{inputenc} before \begin{document}.
You can make a title with:
\title{Title}
\author{Me}
\date{\today}
\maketitle
after \begin{document}

``%'' starts a comment in LaTeX, so you need to escape it with \. The
same goes for a few other characters (&, etc.); $ and [ are for
equations. < and > will not work directly; use $<$ and $>$ instead.
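The pieces above assemble into a minimal sketch of such a wrapper (the
body text is illustrative); compiling with pdflatex file.tex yields
file.pdf in one step:

```latex
\documentclass[a4paper,12pt]{article}
\usepackage[utf8x]{inputenc} % only needed when the text is UTF-8
\pagenumbering{arabic}
\begin{document}
\title{Title}
\author{Me}
\date{\today}
\maketitle
A first line of text.\newline
Escaped specials print literally: \% and \&; comparisons need math
mode, e.g. $<$ and $>$.
\end{document}
```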

Good luck.

Ionel

P.S. I am not the one that suggested to go via LaTeX!

Curt

Jan 9, 2018, 10:40:06 AM
On 2018-01-09, David Wright <deb...@lionunicorn.co.uk> wrote:
>>
>> If you're dealing with latex files, as I have taken some minutes to
>> discover (cough), you need 'pdflatex', not pdftex, which will barf
>> immediately upon encountering latex commands.
>
> So could you now elaborate on step 1 of this "one-step" process?
>
> Cheers,
> David.
>
>

Like the other, more knowledgeable guy said.

pdftex will actually create a pdf out of a text file without complaint
if you put '\end' on a newline at the end of the text file (I wouldn't
recommend such a bare-bones approach, though, in my extremely limited
experience, for formatting reasons). Or you can just type '\end' in the
little interactive mode that comes up in the terminal when errors or
omissions are encountered.

All roads lead to Rome, I reckon.

Brian

Jan 9, 2018, 2:20:06 PM
On Tue 09 Jan 2018 at 15:29:08 +0000, Curt wrote:

> On 2018-01-09, David Wright <deb...@lionunicorn.co.uk> wrote:
> >>
> >> If you're dealing with latex files, as I have taken some minutes to
> >> discover (cough), you need 'pdflatex', not pdftex, which will barf
> >> immediately upon encountering latex commands.
> >
> > So could you now elaborate on step 1 of this "one-step" process?
> >
> > Cheers,
> > David.
> >
> >
>
> Like the other, more knowledgeable guy said.

There are quite a few in this thread. Clue us in?

> pdftex will actually create a pdf out of a text file without complaint
> if you put '\end' on a newline at the end of the text file (I wouldn't
> recommend such a bare-bones approach, though, in my extremely limited
> experience, for formatting reasons). Or you can just type '\end' in the
> little interactive mode that comes up in the terminal when errors or
> omissions are encountered.

My pdftex complained madly about this and eventually threw the towel in.

> All roads lead to Rome, I reckon.

You always learn something new on this list. I thought it was Grimsby.

--
Brian.

Curt

Jan 9, 2018, 2:50:06 PM
On 2018-01-09, Brian <ad...@cityscape.co.uk> wrote:
>> >>
>> >> If you're dealing with latex files, as I have taken some minutes to
>> >> discover (cough), you need 'pdflatex', not pdftex, which will barf
>> >> immediately upon encountering latex commands.
>> >
>> > So could you now elaborate on step 1 of this "one-step" process?
>> >
>> > Cheers,
>> > David.
>>
>> Like the other, more knowledgeable guy said.
>
> There are quite a few in this thread. Clue us in?

The person who responded directly to David's question quoted above,
whose name, exotic in the regions from which I hail, escapes my
remembrance.

>> pdftex will actually create a pdf out of a text file without complaint
>> if you put '\end' on a newline at the end of the text file (I wouldn't
>> recommend such a bare-bones approach, though, in my extremely limited
>> experience, for formatting reasons). Or you can just type '\end' in the
>> little interactive mode that comes up in the terminal when errors or
>> omissions are encountered.
>
> My pdftex complained madly about this and eventually threw the towel
> in.

I can't account for it. If I feed pdftex a latex file, it whines for
every latex command it encounters, but if I press enter on each
encountered command error in the interactive console (if that is indeed
the term for it) it eventually exits completely (maybe it wants me to
'\end'), producing a pdf file (the text of which comprises both the
unknown latex commands as plain old text as well as the text as, well,
pdf-style text, if you catch my drift).

>> All roads lead to Rome, I reckon.
>
> You always learn something new on this list. I thought it was Grimsby.
>


--

David Wright

Jan 9, 2018, 3:10:05 PM
For me, this is a new take on document conversion methods.

FWIW my test file produced 30819 "Missing character" errors which
is hardly surprising as TeX was released 40 years ago in the days
of 7 bit ASCII. The PDF had a single line of characters running
off the right hand side of the page.

Cheers,
David.

Brian

Jan 9, 2018, 3:40:06 PM
On Tue 09 Jan 2018 at 19:41:35 +0000, Curt wrote:

> On 2018-01-09, Brian <ad...@cityscape.co.uk> wrote:
> >
> > There are quite a few in this thread. Clue us in?
>
> The person who responded directly to David's question quoted above,
> whose name, exotic in the regions from which I hail, escapes my
> remembrance.

I am not playing your games. Exotica is outside my field of interest.
Your private life is your own.

> >> pdftex will actually create a pdf out of a text file without complaint
> >> if you put '\end' on a newline at the end of the text file (I wouldn't
> >> recommend such a bare-bones approach, though, in my extremely limited
> >> experience, for formatting reasons). Or you can just type '\end' in the
> >> little interactive mode that comes up in the terminal when errors or
> >> omissions are encountered.
> >
> > My pdftex complained madly about this and eventually threw the towel
> > in.
>
> I can't account for it. If I feed pdftex a latex file, it whines for
> every latex command it encounters, but if I press enter on each
> encountered command error in the interactive console (if that is indeed
> the term for it) it eventually exits completely (maybe it wants me to
> '\end') , producing a pdf file (the text of which comprises both the
> unknown latex commands as plain old text as well as the text as, well,
> pdf-style text, if you catch my drift).

Oh. pdftex is now being fed a latex file, not a plain text file.
Previously:

> pdftex will actually create a pdf out of a *text file*....

(The "*"s are mine. Just in case you fail to notice we are talking
about different things).

Boats. Midstream.

> >> All roads lead to Rome, I reckon.
> >
> > You always learn something new on this list. I thought it was Grimsby.

On second thoughts (everyone can have them), maybe it was Scunthorpe.

--
Brian.

Brian

Jan 9, 2018, 3:50:05 PM
On Tue 09 Jan 2018 at 20:36:23 +0000, Brian wrote:

> I am not playing your games. Exotica is outside my field of interest.
^
^are

Just in case there is a nitpick.

--
Brian.

Curt

Jan 9, 2018, 4:00:04 PM
On 2018-01-09, Brian <ad...@cityscape.co.uk> wrote:
>
> Oh. pdftex is now being fed a latex file, not a plain text file.
> Previously:
>
> > pdftex will actually create a pdf out of a *text file*....

That's right. No contradiction. I just added a data point.

to...@tuxteam.de

Jan 9, 2018, 5:30:05 PM

On Tue, Jan 09, 2018 at 02:09:22PM -0600, David Wright wrote:

[...]

> For me, this is a new take on document conversion methods.
>
> FWIW my test file produced 30819 "Missing character" errors which
> is hardly surprising as TeX was released 40 years ago in the days
> of 7 bit ASCII. The PDF had a single line of characters running
> off the right hand side of the page.

Modern (La)TeX implementations should be able to cope with UTF-8
input: \usepackage[utf8]{inputenc} is one of the recommended magic
incantations (that said, reportedly Lua(La)TeX and Xe(La)TeX are
said to cope even better; they are part of your TeX live distribution
anyway).

Of course you have to make sure that your font supports the glyphs
you actually use.

Cheers
-- tomás

David Wright

Jan 9, 2018, 5:40:04 PM
On Tue 09 Jan 2018 at 20:47:44 (+0000), Curt wrote:
> On 2018-01-09, Brian <ad...@cityscape.co.uk> wrote:
> >
> > Oh. pdftex is now being fed a latex file, not a plain text file.
> > Previously:
> >
> > > pdftex will actually create a pdf out of a *text file*....
>
> That's right. No contradiction. I just added a data point.

Perhaps before this protracted, tangential and niggling subthread
becomes acrimonious or invidious, it might be easier to just state
that TeX/LaTeX is a useless way to turn a *text* file into a PDF.
And that's without discussing whether having to install a TeX system
is any better than installing LibreOffice.

Cheers,
David.

David Wright

Jan 9, 2018, 6:10:04 PM
On Tue 09 Jan 2018 at 23:20:55 (+0100), to...@tuxteam.de wrote:
> On Tue, Jan 09, 2018 at 02:09:22PM -0600, David Wright wrote:
>
> [...]
>
> > For me, this is a new take on document conversion methods.
> >
> > FWIW my test file produced 30819 "Missing character" errors which
> > is hardly surprising as TeX was released 40 years ago in the days
> > of 7 bit ASCII. The PDF had a single line of characters running
> > off the right hand side of the page.
>
> Modern (La)TeX implementations should be able to cope with UTF-8
> input: \usepackage[utf8]{inputenc} is one of the recommended magic
> incantations (that said, reportedly Lua(La)TeX and Xe(La)TeX are
> said to cope even better; they are part of your TeX live distribution
> anyway).
>
> Of course you have to make sure that your font supports the glyphs
> you actually use.

Thanks. I feel ashamed that I'm wasting your time, and that of
Ionel Mugurel Ciobîcă. I'm only testing the methods being suggested
here for conversion. It's proved valuable (for me) as I hadn't come
across txt2pdf before, which is already wrapped up in my .bashrc,
but one does see poor suggestions as well as good ones.

I've been a LaTeX user for over 30 years, so I've stripped out more
Unicode workarounds than I care to mention over the years. Remember
this sort of stuff?

\catcode`Æ=13 \defÆ{\AE}% handle Æ
\catcode`æ=13 \defæ{\ae}% handle æ
\catcode`ß=13 \defß{\ss}% handle ß
\catcode`è=13 \defè{\`e}% handle è
\catcode`Î=13 \defÎ{\^I}% handle Î
\catcode`Œ=13 \defŒ{\OE}% handle Œ
\catcode`œ=13 \defœ{\oe}% handle œ
\catcode`Ř=13 \defŘ{\v R}% handle Ř
\catcode`ř=13 \defř{\v r}% handle ř

I do remember \usepackage[utf8]{inputenc}, but even that has been
superseded by \RequirePackage{fontspec} (yes, I moved on from
using .sty files to .cls files about 5 years ago).

Apologies again.

Cheers,
David.

to...@tuxteam.de

Jan 9, 2018, 6:50:05 PM

On Tue, Jan 09, 2018 at 05:01:50PM -0600, David Wright wrote:
> On Tue 09 Jan 2018 at 23:20:55 (+0100), to...@tuxteam.de wrote:

[...]

> > Modern (La)TeX implementations should be able to cope with UTF-8

[...]

> Thanks. I feel ashamed that I'm wasting your time, and that of
> Ionel Mugurel Ciobîcă.

No worries: people here are not forced to answer :-)

> I'm only testing the methods being suggested
> here for conversion. It's proved valuable (for me) as I hadn't come
> across txt2pdf before, which is already wrapped up in my .bashrc,
> but one does see poor suggestions as well as good ones.
>
> I've been a LaTeX user for over 30 years, so I've stripped out more
> Unicode workarounds than I care to mention over the years. Remember
> this sort of stuff?
>
> \catcode`Æ=13 \defÆ{\AE}% handle Æ
> \catcode`æ=13 \defæ{\ae}% handle æ

[...]

You bet. My first TeX was on an Atari ST ($DEITY knows when: Wikipedia
says between 1985 and 1993). LaTeX wasn't around. UTF-8 much less.
Heck, even Latin-1 wasn't quite there, German tended to cannibalize
several ASCII codes for its special characters (you had to decide
whether you wanted '{' or 'ä' or something). The "upper half" tended
to be some weird graphical chars.

Oh, yes, I do remember. Dimly, though.

> I do remember \usepackage[utf8]{inputenc}, but even that has been
> superceded by \RequirePackage{fontspec} (yes, I moved on from
> using .sty files to .cls files about 5 years ago).

OK.

> Apologies again.

Again, no worries.

Cheers
-- tomás

Brian

Jan 9, 2018, 7:40:05 PM
I am not attempting to dissuade anyone from using paps but trying to
explore why what appears to be a relatively simple process (text to PDF)
has so few utilities. Without a UTF-8 requirement we are awash with
PostScript programs, but not so with direct conversion to PDF. Having
said that, I have no deep understanding of either the PostScript or PDF
format, so perhaps it is more difficult than I imagine; especially when
it comes to producing an output with, for example, columns, headers etc.
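As a data point on how hard the PDF side itself is: not very, as long as
every hard part is ducked. The sketch below (hypothetical, not any of the
utilities named in this thread) wraps Latin-1 text in a valid one-page PDF
using the built-in Courier font, with no line wrapping, columns, headers,
or non-Latin glyph support — which is arguably where the real work, and
the scarcity of utilities, lies:

```python
def text_to_pdf(lines, out_path):
    """Wrap text lines in a minimal single-page PDF (Courier, no wrapping)."""
    # Page content stream: the ' operator moves down one leading and shows text.
    content = b"BT /F1 10 Tf 50 800 Td 12 TL\n"
    for line in lines:
        esc = line.encode("latin-1", "replace")
        for ch, rep in ((b"\\", b"\\\\"), (b"(", b"\\("), (b")", b"\\)")):
            esc = esc.replace(ch, rep)  # escape PDF string delimiters
        content += b"(" + esc + b") '\n"
    content += b"ET"
    objects = [  # objects 1..5: catalog, page tree, page, font, content
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 595 842] "
        b"/Resources << /Font << /F1 4 0 R >> >> /Contents 5 0 R >>",
        b"<< /Type /Font /Subtype /Type1 /BaseFont /Courier >>",
        b"<< /Length %d >>\nstream\n%s\nendstream" % (len(content), content),
    ]
    pdf, offsets = b"%PDF-1.4\n", []
    for num, obj in enumerate(objects, 1):
        offsets.append(len(pdf))  # byte offset of each object, for the xref
        pdf += b"%d 0 obj\n%s\nendobj\n" % (num, obj)
    xref = len(pdf)
    pdf += b"xref\n0 %d\n0000000000 65535 f \n" % (len(objects) + 1)
    pdf += b"".join(b"%010d 00000 n \n" % off for off in offsets)
    pdf += b"trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n" % (
        len(objects) + 1, xref)
    with open(out_path, "wb") as f:
        f.write(pdf)

text_to_pdf(["Hello, PDF.", "Escaped: (parens) and \\backslash."], "out.pdf")
```

A page of code for the easy case; wrapping, pagination, layout and UTF-8
fonts are exactly what it omits.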

However, there are utilities which can help with preprocessing a text
file beforehand.

As a UTF-8 Debian alternative to txt2pdf:

Create $HOME/.config/fontconfig/fonts.conf with the contents

<?xml version='1.0'?>
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
<fontconfig>
<alias>
<family>monospace</family>
<prefer>
<family>freemono</family>
</prefer>
</alias>
</fontconfig>

(Is there anything better than FreeMono's UTF-8 glyph coverage?)

Then

CHARSET=utf-8 /usr/lib/cups/filter/texttopdf 1 1 1 1 1 < text.txt > out.pdf

out.pdf is not searchable, so continue with

pdftocairo -pdf out.pdf searchable.pdf

A bonus is that searchable.pdf is about seven times smaller than out.pdf.

--
Brian.

Curt

Jan 10, 2018, 4:10:05 AM
On 2018-01-09, David Wright <deb...@lionunicorn.co.uk> wrote:
>
> Perhaps before this protracted, tangential and niggling subthread
> becomes acrimonious or invidious, it might be easier to just state

Well, it did, through no fault of mine own, I must say, turn both
acrimonious and invidious. I have been neither (up till now).

Someone: use latex (to produce a pdf).
Someone else: That will produce a dvi, not what the OP wanted; a two-step
rather than a one-step operation.
Me: not so, use pdftex; a one-step operation (from *latex* to pdf).
Me correcting me: actually, you need pdflatex for *latex*.
You (a thirty-year veteran latex user, we learn elsewhere): Please explain
the first step (which is how to create a latex file).

> than TeX/LaTeX is a useless way to turn a *text* file into a PDF.

Sure. However no one suggested that, did they? Did you think *I* was
suggesting that as a viable method or approach? I wasn't; I'm sorry for
your confusion; I was merely surprised to get legible pdf output from a
text file with pdftex (I think I did have to type '\end' in the
interactive console, though, maybe). YMV (Your Mileage Varied). That
particular tangent I meant only as an aside, for the pure scientific
interest of the thing. I mean in the interest of experimentation. (I
only went down that experimental road because when I wrong-headedly ran
pdftex on a latex file I had on hand and cycled through all the errors,
it did spit out a readable pdf file).

> And that's without discussing whether having to install a TeX system
> is any better than installing LibreOffice.

Yes, I know, you're all flying to the moon in 1969 and must fit
everything into a kilobyte or two.

But I did foresee this objection with my Gorilla-microbe metaphor,
although I needn't have done so as I was not the one to make the original
suggestion of latex for the production of pdfs in the first place.

> Cheers,
> David.

to...@tuxteam.de

Jan 10, 2018, 4:50:05 AM

On Wed, Jan 10, 2018 at 09:02:08AM +0000, Curt wrote:

[...]

> You (a thirty-year veteran latex user we learn elsewhere): Please explain the
> first step (which is how to create a latex file).
>
> > than TeX/LaTeX is a useless way to turn a *text* file into a PDF.

Perhaps the conflict can be tackled better by making something
explicit what has been implicit all along: is (La)TeX text, or
is it not? Both standpoints exist, and this dichotomy can even
be seen in the mime types database y'all have received as a
present with your distribution; but note some peculiarity:

tomas@trotzki:~$ egrep '\<(la)?tex\>' /etc/mime.types
application/x-latex latex
application/x-tex-gf gf
application/x-tex-pk pk
text/vnd.latex-z
text/x-tex tex ltx sty cls

Note something? (I'm ignoring for the moment .gf and .pk, which
arguably can be seen as binary for now). LaTeX goes under the
umbrella "application" (ain't text) and tex (with its LaTeX
sibling sty and LaTeX-3 sibling .cls) under "text". What is
going on here?

A couple of years ago, XML seemed like the promise from the
future, I was looking how to serve XML files over HTTP. For me
(coming from a more Unix-y and free background) it was clearly
"text": everything I can grab at with my vi(m) or Emacs is text,
and XML, though slightly disgusting at times, fell squarely into
that category. So "text/xml" shall it be.

My surprise was not small when I realized that clients from the
Microsoft camp nearly freaked out at that proposal. They wanted
"application/xml".

To me, this was a clear culture clash between a DIY, think-for-
yourself culture (hey, it's text: just throw an editor at it
and hack at it) versus an authoritarian one ("only use
with vendor-approved software; warranty void, and if we ever
find out how (DMCA?), we'll send the black helicopters"[1])

Yeah. Both standpoints exist, and I think we should respect
each other and just acknowledge the difference.

But for me, (La)TeX *are* text :-)

Cheers

[1] Here you notice I've some bias in that: but then, I hope
you noticed a while ago :)

-- tomás

Brian

Jan 10, 2018, 5:40:04 AM
On Wed 10 Jan 2018 at 09:02:08 +0000, Curt wrote:

> On 2018-01-09, David Wright <deb...@lionunicorn.co.uk> wrote:

[...]

> > And that's without discussing whether having to install a TeX system
> > is any better than installing LibreOffice.
>
> Yes, I know, you're all flying to the moon in 1969 and must fit
> everything into a kilobyte or two.
>
> But I did foresee this objection with my Gorilla-microbe metaphor,
> although I needn't have done so as I was not the one to make the original
> suggestion of latex for the production of pdfs in the first place.

Given a willingness to devote the necessary resources to the task, a
decent case can be made for using unoconv to convert text (and other
document types) to PDF. The method relies on starting unoconv as a
listener in the background (unoconv -l &). X is not required.

Quite a sophisticated conversion engine can be constructed: text, MS
Word, ODT, RTF, etc. to PDF. Printing to a real printer or to file can
also be built into the system. I sometimes wonder whether this is
used as the basis for some of the online conversion services.

Simplicity, txt2pdf et al, can be an advantage but having multiple
methods at hand cannot be bad.

--
Brian.

Ionel Mugurel Ciobîcă

Jan 10, 2018, 8:20:05 AM
On 10-01-2018, at 10h 46'43", to...@tuxteam.de wrote about "Re: How to create a PDF-Printer from the command line"
>
> But for me, (La)TeX *are* text :-)
>

+1


Ionel

David Wright

Jan 10, 2018, 10:40:06 AM
Yes, you can put me in the text camp, and a little further over
than you, I believe. I have several extensions in my ~/.mime-types
which are set to "text/plain" which means (in mutt, anyway) that
such attachments are displayed just as if they were email text.
latex wasn't in that list only because I'm not in the habit of
attaching LaTeX files—I would attach the generated PDF. But thanks
for the heads-up.

From the point of view of the OP and this thread, it's self-evident
that the output PDF was required to contain a listing of the text
file itself, *not* the results of its interpretation by software
that just happens to produce PDF-formatted files amongst its output.

A possibly sensible method involving LaTeX (if that happened to be
installed on the target system) might be to post some boilerplate
code that could be used to top and tail the original file; this
would likely involve \verbatim directives and suchlike. Not having
used it like that, I have no idea how generalised that could be made.

But that's very different from suggesting feeding the raw file to
fooLaTeX and seeing what comes out of the other end. Different, too,
from saying that if it happens to be valid LaTeX code, process it
and print the output instead. The latter might produce

2. PROGRAM SUMMARY

in 30 point bold Palatino, whereas a listing should contain a line like

\mychapter{Program summary}

I might just mention that many of us will have experience of
pretty-printing (IIRC a2ps can do that) which is a halfway house:
suitable text files are constrained in their contents (typically
they are source code to some programming language) but processed
by the printing system to improve its readability. It's certainly
*not* processed by the target interpreter/compiler.

Cheers,
David.

to...@tuxteam.de

Jan 10, 2018, 11:20:06 AM

On Wed, Jan 10, 2018 at 09:38:47AM -0600, David Wright wrote:
> On Wed 10 Jan 2018 at 10:46:43 (+0100), to...@tuxteam.de wrote:

[text, yeah!]

> > But for me, (La)TeX *are* text :-)
>
> Yes, you can put me in the text camp, and a little further over
> than you, I believe [...]

Phew! I'm not alone, it seems :)

[...]

> But that's very different from suggesting feeding the raw file to
> fooLaTeX and seeing what comes out of the other end. Different, too,
> from saying that if it happens to be valid LaTeX code, process it
> and print the output instead. The latter might produce

And to be fair, results can be pretty exciting, depending on the
actual file content...

I would go with a2ps, too, btw.

Downside is that it does the panoramic tour via PS and thus generates
fairly hefty PDFs.

Cheers
-- t

Brian

Jan 10, 2018, 2:20:06 PM
On Wed 10 Jan 2018 at 17:13:09 +0100, to...@tuxteam.de wrote:

> On Wed, Jan 10, 2018 at 09:38:47AM -0600, David Wright wrote:
>
> And to be fair, results can be pretty exciting, depending on the
> actual file content...
>
> I would go with a2ps, too, btw.
>
> Downside is that it does the panoramic tour via PS and thus generates
> fairly hefty PDFs.

With plain text files as an input?

brian@desktop3:~$ a2ps /etc/mime.types -o output1.ps
[/etc/mime.types (plain): 15 pages on 8 sheets]
[Total: 15 pages on 8 sheets] saved into the file `output1.ps'
[73 lines wrapped]

brian@desktop3:~$ ps2pdf output1.ps

brian@desktop3:~$ txt2pdf.py /etc/mime.types -f /usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf -o output2.pdf
Writing '/etc/mime.types' with 80 characters per line and 60 lines per page...
PDF document: 14 pages

brian@desktop3:~$ /usr/sbin/cupsfilter /etc/mime.types > output3.pdf

brian@desktop3:~$ ls -l
-rw-r--r-- 1 brian brian 28853 Jan 10 18:58 output1.pdf
-rw-r--r-- 1 brian brian 66557 Jan 10 18:58 output1.ps
-rw-r--r-- 1 brian brian 36221 Jan 10 18:58 output2.pdf
-rw-r--r-- 1 brian brian 219186 Jan 10 18:59 output3.pdf

output3.pdf contains the complete DejaVuSansMono glyph set.

The finger is often pointed at ps2pdf as a file bloating command.
Unjustifiably, it would seem, in this case. A counter example with a
text file?

--
Brian.

Brian

Jan 10, 2018, 2:30:04 PM
Another plus for unoconv is that it will process a text file to produce
a PDF/A compliant file. If Latex was not completely out of the running,
it is now. (Whether or not it is regarded, irrelevantly, as text).

--
Brian.

to...@tuxteam.de

Jan 10, 2018, 3:10:06 PM

On Wed, Jan 10, 2018 at 07:10:03PM +0000, Brian wrote:
> On Wed 10 Jan 2018 at 17:13:09 +0100, to...@tuxteam.de wrote:

[...]

> > Downside is that it does the panoramic tour via PS and thus generates
> > fairly hefty PDFs.
>
> With plain text files as an input?

[...]

> brian@desktop3:~$ ls -l
> -rw-r--r-- 1 brian brian 28853 Jan 10 18:58 output1.pdf
> -rw-r--r-- 1 brian brian 66557 Jan 10 18:58 output1.ps
> -rw-r--r-- 1 brian brian 36221 Jan 10 18:58 output2.pdf
> -rw-r--r-- 1 brian brian 219186 Jan 10 18:59 output3.pdf
>
> output3.pdf contains the complete DejaVuSansMono glyph set.
>
> The finger is often pointed at ps2pdf as a file bloating command.
> Unjustifiably, it would seem, in this case. A counter example with a
> text file?

Thanks for actually trying out. I stand corrected...

Cheers
-- t

Brian

Jan 17, 2018, 1:30:06 PM
On Wed 10 Jan 2018 at 21:01:13 +0100, to...@tuxteam.de wrote:

> On Wed, Jan 10, 2018 at 07:10:03PM +0000, Brian wrote:
> > On Wed 10 Jan 2018 at 17:13:09 +0100, to...@tuxteam.de wrote:
>
> [...]
>
> > > Downside is that it does the panoramic tour via PS and thus generates
> > > fairly hefty PDFs.
> >
> > With plain text files as an input?
>
> [...]
>
> > brian@desktop3:~$ ls -l
> > -rw-r--r-- 1 brian brian 28853 Jan 10 18:58 output1.pdf
> > -rw-r--r-- 1 brian brian 66557 Jan 10 18:58 output1.ps
> > -rw-r--r-- 1 brian brian 36221 Jan 10 18:58 output2.pdf
> > -rw-r--r-- 1 brian brian 219186 Jan 10 18:59 output3.pdf
> >
> > output3.pdf contains the complete DejaVuSansMono glyph set.
> >
> > The finger is often pointed at ps2pdf as a file bloating command.
> > Unjustifiably, it would seem, in this case. A counter example with a
> > text file?
>
> Thanks for actually trying out. I stand corrected...

A gracious response. However, my data were in the context of using a2ps
to go from text to PS. Your "hefty PDFs" would be entirely correct if
paps had been used for the conversion. The result is a 156551 byte file
for me. ps2pdf comes up with a whopping 11540827 bytes and takes 18 s to do so.

An explanation is given by a Ghostscript developer at

https://stackoverflow.com/questions/26066535/ps2pdf-creates-a-very-big-pdf-file-from-paps-created-ps-file

The same page has

> As the author of paps,...I have just released a new version of paps...

The new version's README has

> paps is a command line program for converting Unicode text
> encoded in UTF-8 to postscript and pdf by using pango.

That was in 2015; Debian's paps does not reflect the existence of a
7.0 version. I wonder why?

--
Brian.

to...@tuxteam.de

Jan 17, 2018, 3:10:05 PM

On Wed, Jan 17, 2018 at 06:21:31PM +0000, Brian wrote:
> On Wed 10 Jan 2018 at 21:01:13 +0100, to...@tuxteam.de wrote:

[...]

> > Thanks for actually trying out. I stand corrected...
>
> A gracious response. However, my data were in the context of using a2ps
> to go from text to PS. Your "hefty PDFs" would be entirely correct if
> paps had been used for the conversion. The result is a 156551 sized file
> for me. gs2pdf comes up with a whopping 11540827 and takes 18 s to do so.

Heh. I've been called all sort of names, but gracious... :-)

> That was in 2015; Debian's paps does not reflect the existence of a
> 7.0 version. I wonder why?

Yes, that might be the root of my dim memories.

Thanks for your thorough investigation

cheers
-- tomás

Joel Wirāmu Pauling

Jan 17, 2018, 4:20:06 PM
**cough** $convert

imagemagick

$convert somefile.whatever somefile.pdf

---

Ben Caradoc-Davies

Jan 17, 2018, 4:40:06 PM
On 18/01/18 10:15, Joel Wirāmu Pauling wrote:
> **cough** $convert
> imagemagick
> $convert somefile.whatever somefile.pdf

+1 for ImageMagick convert to generate PDFs from scanned pages (images).
I found that this works best with -page and -density specified. However,
I have not tried using it for text, which I think was the problem facing
the original poster.

Kind regards,

--
Ben Caradoc-Davies <b...@transient.nz>
Director
Transient Software Limited <https://transient.nz/>
New Zealand

Joel Wirāmu Pauling

Jan 17, 2018, 4:40:06 PM
Works fine for txt, although as it rasterizes things it's not going to be optimized for size.

Ben Caradoc-Davies

Jan 17, 2018, 4:50:05 PM
On 18/01/18 10:37, Joel Wirāmu Pauling wrote:
> Works fine for txt, although as it rasterizes things it's not going to be
> optimized for size.

Yes, typically, but for large fonts and low resolution outputs with few
pages, rasterised pages may be smaller.

Chris Ramsden

Jan 17, 2018, 4:50:05 PM
On 17/01/18 21:42, Ben Caradoc-Davies wrote:
> On 18/01/18 10:37, Joel Wirāmu Pauling wrote:
>> Works fine for txt, although as it rasterizes things it's not going to be
>> optimized for size.
>
> Yes, typically, but for large fonts and low resolution outputs with few pages, rasterised pages may be smaller.
>
> Kind regards,
>
If I feed it a text file, it gives me an error:

convert: improper image header `self_spam.txt' @ error/txt.c/ReadTXTImage/439.

What's the trick to making it work with a text file as input?
--
Chris

Dan Ritter

Jan 17, 2018, 5:10:05 PM
Use pandoc instead.


-dsr-

Brian

Jan 17, 2018, 5:50:04 PM
On Wed 17 Jan 2018 at 21:04:03 +0100, to...@tuxteam.de wrote:

> On Wed, Jan 17, 2018 at 06:21:31PM +0000, Brian wrote:
> > On Wed 10 Jan 2018 at 21:01:13 +0100, to...@tuxteam.de wrote:
>
> [...]
>
> > > Thanks for actually trying out. I stand corrected...
> >
> > A gracious response. However, my data were in the context of using a2ps
> > to go from text to PS. Your "hefty PDFs" would be entirely correct if
> > paps had been used for the conversion. The result is a 156551 sized file
> > for me. gs2pdf comes up with a whopping 11540827 and takes 18 s to do so.
>
> Heh. I've been called all sort of names, but gracious... :-)
>
> > That was in 2015, Debian's paps does not relect the existence of a
> > 7.0 version. I wonder why?
>
> Yes, that might be the root of my dim memories.
>
> Thanks for your thorough investigation

Anyone can take part. Supporting data is always an advantage.

Anyway, I think I'll abandon my flirtation with paps and return to
something more suitable for efficiently converting text to a searchable
PDF.

--
Brian.

Brian

Jan 17, 2018, 6:00:04 PM
I'd like to nominate this response as the most useless of 2018.

"it" obviously refers to the imagemagick convert utility.

--
Brian.

Brian

Jan 17, 2018, 6:00:04 PM
On Thu 18 Jan 2018 at 10:15:45 +1300, Joel Wirāmu Pauling wrote:

> **cough** $convert
>
> imagemagick
>
> $convert somefile.whatever somefile.pdf

**splutter, splutter**.

convert(1)

convert - convert between image formats

Plain text is an image format? One lives and learns.

--
Brian.

Curt

Jan 18, 2018, 4:50:05 AM
I ran into the same error (fixed in later versions, apparently, though I
am in the dark as to what version you are using and how late is later).

I read an explicit

convert text:mytext.txt mytext.pdf

works.

But it doesn't work here.

Also: all roads lead to Rome, but some may be more suitable for wagons than others.


--
“True terror is to wake up one morning and discover that your high school class
is running the country.” – Kurt Vonnegut

Brian

Jan 18, 2018, 7:50:06 AM
On Thu 18 Jan 2018 at 09:44:51 +0000, Curt wrote:

> On 2018-01-17, Chris Ramsden <chris....@gmail.com> wrote:
> > On 17/01/18 21:42, Ben Caradoc-Davies wrote:
> >> On 18/01/18 10:37, Joel Wirāmu Pauling wrote:
> >>> Works fine for txt, although as it rasterizes things it's not going to be
> >>> optimized for size.
> >>
> >> Yes, typically, but for large fonts and low resolution outputs with few pages, rasterised pages may be smaller.
> >>
> >> Kind regards,
> >>
> > If I feed it a text file, it gives me an error:
> >
> > convert: improper image header `self_spam.txt' @ error/txt.c/ReadTXTImage/439.
> >
> > What's the trick to making it work with a text file as input?
>
> I ran into the same error (fixed in later versions, apparently, though I
> am in the dark as to what version you are using and how late is later).
>
> I read an explicit
>
> convert text:mytext.txt mytext.pdf
>
> works.
>
> But it doesn't work here.
>
> Also: all roads lead to Rome, but some may be more suitable for wagons than others.

As is obvious, I hadn't realised imagemagick converted text to pdf.
However, the command you give works for me on stable and unstable.
It gives unsearchable PDFs.

--
Brian

David Wright

Jan 18, 2018, 11:00:07 AM
It serves more as a reminder of convert's way of specifying the
contents of the file than much else. Adding -font unifont also dredges
up the awful-looking ttf font. But given that the PDF produced is so
blurry, that font is probably no worse looking than any other.

The PDF is unsearchable because it's really just an image wrapped in
a PDF container (like the output from some scanners). Being an image,
there's no concept of wrapping long lines either.

I can't really be bothered to figure out what's missing on those
systems of mine that say:

$ convert -font unifont text:/etc/default/grub /tmp/grubby.pdf
convert: not authorized `/etc/default/grub' @ error/constitute.c/ReadImage/412.
convert: no images defined `/tmp/grubby.pdf' @ error/convert.c/ConvertImageCommand/3210.
$

Cheers,
David.
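One plausible cause of the "not authorized" error above (an assumption on
my part, since the failing systems are not shown): ImageMagick consults a
security policy before reading or writing, and a restrictive entry in
/etc/ImageMagick-6/policy.xml produces exactly that message; `convert
-list policy` shows what is in force. A hypothetical excerpt that would
block the text coder:

```xml
<!-- Hypothetical /etc/ImageMagick-6/policy.xml excerpt: rights="none"
     disables the named coder, and reads fail with "not authorized". -->
<policymap>
  <policy domain="coder" rights="none" pattern="TEXT"/>
</policymap>
```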

Brian

Jan 18, 2018, 2:20:04 PM
Two strikes.

> I can't really be bothered to figure out what's missing on those
> systems of mine that say:
>
> $ convert -font unifont text:/etc/default/grub /tmp/grubby.pdf
> convert: not authorized `/etc/default/grub' @ error/constitute.c/ReadImage/412.
> convert: no images defined `/tmp/grubby.pdf' @ error/convert.c/ConvertImageCommand/3210.

I used FreeMono, with the full path to the font. Looks reasonable until
it is blown up and the bitmapped nature of the PDF shows up clearly.
Strike three.

I can accept that imagemagick is excellent for processing images (not
that I have done much of that with it) but, for producing a quality PDF
from text, it has been crossed off my list.

--
Brian.