Bibtex Unicode


ldd

Dec 10, 2007, 7:45:03 PM
to zotero-dev
The bibtex filter that comes by default with the version of Zotero
available to the general public tries, when exporting, to convert
accented characters to the arcane bibtex method for supporting
accents. However, for at least a year now (and probably much longer
than that), widely available distributions of TeX have supported
bibtex files with Unicode characters.

So except for cases where people are trying to support legacy setups,
bibtex files should be exported as Unicode without attempting to
convert the accents. The current bundled filter works well enough for files
with fairly common accents, but it makes mincemeat of the articles I
cite whose titles contain Sanskrit words. (A word like ākāśa
becomes ?k??a.) The upshot is that it makes Zotero useless for me.
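For readers unfamiliar with how this mangling happens, here is a rough sketch of the kind of lookup-table conversion such an exporter performs. This is illustrative only, not Zotero's actual code, and the tiny table stands in for a much larger one:

```javascript
// Illustrative sketch (not Zotero's real code) of a lookup-table
// accent converter: known accented characters become TeX control
// sequences, and anything outside the table is lost as "?".
var accentMap = {
  "\u00e9": "\\'{e}",   // é
  "\u00e8": "\\`{e}",   // è
  "\u00fc": "\\\"{u}"   // ü
};

function toBibTeXAccents(text) {
  var out = "";
  for (var i = 0; i < text.length; i++) {
    var ch = text.charAt(i);
    if (ch.charCodeAt(0) < 128) {
      out += ch;                        // plain ASCII passes through
    } else if (accentMap[ch]) {
      out += "{" + accentMap[ch] + "}"; // mapped accent -> TeX markup
    } else {
      out += "?";                       // unmapped character is destroyed
    }
  }
  return out;
}

// A title containing "ākāśa" comes out as "?k??a": the Sanskrit
// diacritics have no entry in the table.
```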

I've taken the default bibtex filter and modified it to output utf8
encoded files without translation of accents. I've called it "Bibtex
Unicode". I think such functionality could be easily incorporated in
the next version of Zotero. What can I do to help the process along?

Richard Karnesky

Dec 11, 2007, 10:40:39 AM
to zotero-dev
> The bibtex filter that comes by default with the version of Zotero
> available to the general public tries, when exporting, to convert
> accented characters to the arcane bibtex method for supporting
> accents. However, for at least a year now (and probably much longer
> than that), widely available distributions of TeX have supported
> bibtex files with Unicode characters.

SOME versions of bibtex are able to support SOME UTF-8, but MOST do
not support all of it (particularly multi-byte characters). See a
discussion in comp.text.tex:

<http://groups.google.com/group/comp.text.tex/msg/8aefd925c735c842?dmode=source>


> So except for cases where people are trying to support legacy setups,
> bibtex files should be exported as Unicode without attempting to
> convert the accents.

Due to the partial support, I think the best thing to do would be to
improve the UTF-8 -> TeX markup converter where it is lacking & to
also give the user the choice of saving to latin-1 through this
converter or to use UTF-8.

UTF-8 should NOT be the only option for export.
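The choice described above could look something like this inside a translator. A minimal sketch under assumed names (`exportField` is invented, and the demo table stands in for a full converter):

```javascript
// Sketch of a dual-mode export: pass text through untouched for UTF-8
// output, or run it through the UTF-8 -> TeX converter for a
// latin-1-safe file. exportField and the tiny map are invented
// for illustration.
function exportField(text, useUnicode) {
  if (useUnicode) return text;  // UTF-8 mode: leave characters alone
  return text.replace(/[\u0080-\uFFFF]/g, function (ch) {
    var map = { "\u00e9": "{\\'{e}}", "\u00f6": "{\\\"{o}}" }; // demo table
    return map[ch] || "?";      // unknown characters still degrade to "?"
  });
}
```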

ldd

Dec 11, 2007, 1:17:04 PM
to zotero-dev
On Dec 11, 10:40 am, Richard Karnesky <karne...@gmail.com> wrote:

> UTF-8 should NOT be the only option for export.

I agree. That's why I named my filter "Bibtex Unicode" instead of
just modifying the old one. I guess the question is why I did not
modify the old filter to support multiple options. The answer is
simple: I don't know Zotero's architecture.

Bruce D'Arcus

Dec 11, 2007, 1:30:11 PM
to zoter...@googlegroups.com

Now this (a developer willing to submit patches who effectively cannot)
is not good. What's the solution to this problem?

Bruce

Louis-Dominique Dubeau

Dec 11, 2007, 1:56:34 PM
to zoter...@googlegroups.com
Here's a patch against the code of the stock Bibtex filter as bundled
with the Zotero plugin published on the Zotero web site (1.0.1).
Yesterday I was ready to take the time to package it into something nice
had someone pointed me in the right direction (I did ask what I
could do to help things along), but I've been too busy answering
criticism in the forum. My time being a limited resource, this is the
most I'm able to do for now.

For the record, I'm not pretending that it is something that should be
integrated "as is" into Zotero. It should certainly not replace the
current Bibtex filter for one thing and my solution of creating an
entirely separate filter is not elegant. Quite likely there is a better
way to support this.

diff

Dan Stillman

Dec 11, 2007, 3:49:01 PM
to zoter...@googlegroups.com
On 12/11/07 1:56 PM, Louis-Dominique Dubeau wrote:
> Here's a patch against the code of the stock Bibtex filter as bundled
> with the Zotero plugin published on the Zotero web site (1.0.1).
>
> ...

>
> For the record, I'm not pretending that it is something that should be
> integrated "as is" into Zotero. It should certainly not replace the
> current Bibtex filter for one thing and my solution of creating an
> entirely separate filter is not elegant. Quite likely there is a better
> way to support this.
>

Right. We probably need two separate prefs--say,
zotero.extensions.exportUnicodeBibTeX and exportUnicodeRIS (both enabled
by default?)--and some code in translate.js to 1) override the character
set when using these formats with the pref enabled and 2) put some flag
into the sandbox so the BibTeX translator knows not to try to replace
Unicode characters in the places indicated by this patch.

If someone wants to take care of this, take the ticket
(https://www.zotero.org/trac/ticket/749) or post here.
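The two-pref design above might be wired up roughly like this. Only the pref names come from the message; the surrounding functions are stand-ins for whatever translate.js actually does:

```javascript
// Stand-in sketch of the pref plumbing described above. The pref
// names are from the message; the functions are invented.
var prefs = {
  "zotero.extensions.exportUnicodeBibTeX": true,
  "zotero.extensions.exportUnicodeRIS": true
};

// 1) override the character set when the pref is enabled
function characterSetFor(format) {
  if (format === "bibtex" && prefs["zotero.extensions.exportUnicodeBibTeX"]) {
    return "UTF-8";
  }
  if (format === "ris" && prefs["zotero.extensions.exportUnicodeRIS"]) {
    return "UTF-8";
  }
  return "ISO-8859-1";
}

// 2) flag in the sandbox so the BibTeX translator knows not to
// replace Unicode characters itself
function sandboxFlags(format) {
  return { exportUnicode: characterSetFor(format) === "UTF-8" };
}
```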

Julian Onions

Dec 12, 2007, 12:04:36 PM
to zoter...@googlegroups.com

The change for BibTeX export/import is now uploaded. It includes a configuration variable,
extensions.zotero.export.unicodeBibTeX (true/false),
to allow native UTF-8. However, it also does a much better job of importing/exporting the regular format, so the option may not be needed as much.
I'm sure there are bugs in it though, so drop a line if you find any.

Julian.


Dan Stillman

Dec 13, 2007, 12:04:25 AM
to zoter...@googlegroups.com
On 12/12/07 12:04 PM, Julian Onions wrote:
> The change for bibtex export/import is now uploaded - it includes a
> configuration variable of
> extensions.zotero.export.unicodeBibTeX (true/false)
> to allow native UTF8. However it also does a much better job
> importing/exporting regular format too - so may not be needed as much.
> I'm sure there are bugs in it though, so drop a line if you find any.

Many thanks to Julian for the patch.

Some notes and questions:

1) What's the more reasonable default setting for outputting UTF-8, on
or off? Julian's patch has it off, and Rick recommends off in the forums.

2) I know it was in the original code, but rather than manually
replacing \u0080-\uFFFF, I think we can just add an else {
Zotero.setCharacterSet("latin1"); } if Zotero.useBibtexUTF8 is false and
it'll replace out-of-bounds characters with question marks
automatically. I'm not sure what character set it's trying to use when
UTF-8 isn't set explicitly, but as it is now an unmapped (Chinese)
character gets mangled in the output file.

3) When the setting is off, we may need to do something smarter to
multibyte characters in keys to keep them valid (since I suspect
question marks aren't valid). Or maybe even when the setting is on--do
UTF-8-aware implementations handle keys with Unicode characters?

4) The useBibtexUTF8 setting doesn't seem to have an effect on Quick
Copy, which just uses Unicode regardless. Quick Copy for BibTeX is
handled by fileInterface.js::exportItemsToClipboard() and may not
trigger the same stream code in translate.js that allows
setCharacterSet() to work. Simon should be able to fix this if it's not
clear.
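Point 2 above, as a minimal mock (the Zotero object here is a stand-in for the real sandbox API, not actual Zotero code):

```javascript
// Minimal mock of point 2: instead of hand-replacing \u0080-\uFFFF,
// select the output character set and let the stream converter
// downgrade unmappable characters to "?" on its own.
var Zotero = {
  useBibtexUTF8: false,  // pretend the setting is off
  characterSet: null,
  setCharacterSet: function (cs) { this.characterSet = cs; }
};

if (Zotero.useBibtexUTF8) {
  Zotero.setCharacterSet("UTF-8");
} else {
  Zotero.setCharacterSet("latin1");  // the suggested else branch
}
```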

Louis-Dominique Dubeau

Dec 13, 2007, 8:33:45 AM
to zoter...@googlegroups.com
On Thu, 2007-12-13 at 00:04 -0500, Dan Stillman wrote:
> Many thanks to Julian for the patch.

Indeed. Thanks Julian.

> Some notes and questions:
>
> 1) What's the more reasonable default setting for outputting UTF-8, on
> or off? Julian's patch has it off, and Rick recommends off in the forums.

I recommend it on. I submit that distributions of TeX which can support
UTF8 are in the majority even if users don't realize it. I myself made
that discovery by chance about a year ago. And I'm not running some
strange setup either: the OS is Ubuntu 7.10 and the TeX distribution is
TeXLive 2007 which is produced by the TeX User Group (not marginal by
any means). When I made the discovery I was running Debian sid and the
TeX distribution was a TeXLive release prior to 2007 but I don't
remember what precise release it was.

Moreover, if the exported file is UTF8, it can easily be processed by
other tools (grep, Python/Perl scripts, etc.) and it can be trivially
indexed, etc.

> 2) I know it was in the original code, but rather than manually
> replacing \u0080-\uFFFF, I think we can just add an else {
> Zotero.setCharacterSet("latin1"); } if Zotero.useBibtexUTF8 is false and
> it'll replace out-of-bounds characters with question marks
> automatically. I'm not sure what character set it's trying to use when
> UTF-8 isn't set explicitly, but as it is now an unmapped (Chinese)
> character gets mangled in the output file.

Here's a related thought: I'd like Zotero to warn the user that some
characters are not representable in the output format. If programmatic
facilities for doing this semi-automatically are lacking, a test is
required for the presence of characters in the \u0080-\uFFFF range which
cannot be converted to BibTeX's dialect.
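That test could be as simple as a scan of the export output. A sketch, with a tiny stand-in for the real mapping table:

```javascript
// Sketch of the proposed warning check: find characters in the
// \u0080-\uFFFF range that the TeX mapping table cannot convert, so
// the user can be warned before data is silently lost. texMap is a
// tiny stand-in for the real table.
var texMap = { "\u00e9": "{\\'{e}}" };

function findUnconvertible(text) {
  var bad = [];
  var candidates = text.match(/[\u0080-\uFFFF]/g) || [];
  for (var i = 0; i < candidates.length; i++) {
    var ch = candidates[i];
    if (!texMap[ch] && bad.indexOf(ch) === -1) {
      bad.push(ch);  // collect each problem character once
    }
  }
  return bad;
}
```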

> 3) When the setting is off, we may need to do something smarter to
> multibyte characters in keys to keep them valid (since I suspect
> question marks aren't valid). Or maybe even when the setting is on--do
> UTF-8-aware implementations handle keys with Unicode characters?

To my knowledge, keys are not supposed to contain anything other than a
subset of the characters representable in ASCII (I don't know the
precise subset), no matter which TeX distribution is used. I've done a
quick search and found nothing to confirm or refute my hypothesis.
Emacs' bibtex.el does operate on the assumption that a key must be
representable in ASCII but I did not find a reference to a reliable
source in that file.
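If keys must indeed be ASCII-only, a conservative sanitizer along these lines would do. The exact legal subset is uncertain, as noted above, so the character class here is an assumption:

```javascript
// Conservative cite-key sanitizer: keep a safe ASCII subset and drop
// everything else. Which characters are actually legal varies by
// BibTeX implementation; this class is an assumption, not a
// documented rule.
function sanitizeCiteKey(key) {
  return key.replace(/[^A-Za-z0-9:._-]/g, "");
}
```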

I have nothing to say on point 4.

Thanks,
Louis

Richard Karnesky

Dec 13, 2007, 11:22:13 AM
to zotero-dev
> > 1) What's the more reasonable default setting for outputting UTF-8, on
> > or off? Julian's patch has it off, and Rick recommends off in the forums.

My real recommendation is to watch JabRef or the other popular
reference managers for bibtex & follow their lead. When/if UTF-8
becomes the default for them, it can become the default for Zotero.


> I recommend it on. I submit that distributions of TeX which can support
> UTF8 are in the majority even if users don't realize it. I myself made
> that discovery by chance about a year ago. And I'm not running some
> strange setup either: the OS is Ubuntu 7.10 and the TeX distribution is
> TeXLive 2007 which is produced by the TeX User Group (not marginal by
> any means).

I don't think this single datapoint is enough to base the decision
on. TeXLive is available for all platforms, however it is not the
dominant TeX distribution (nor representative of what is deployed).
Further: those that use older versions of TeXLive would not have the
same experience you are having. I believe you are finding it easy to
use UTF-8 because, beginning with version 2007, TeXLive shipped with
XeTeX (which handles native unicode (although still not perfectly)).
XeTeX still isn't in many of the commercial distributions of TeX (PC
TeX, Scientific WorkPlace, etc.) & isn't even yet in the most popular
free windows distribution (MikTeX). It will slowly start to trickle
into these other distributions (it will be in MikTeX 2.7), but I still
think being conservative is the way to go: many people do not use the
latest version of their TeX suites & some don't have the choice to (as
they run TeX on a shared machine).

Also note that arXiv & other popular/online/public TeX users are
confined to ISO Latin 1.


> > Or maybe even when the setting is on--do
> > UTF-8-aware implementations handle keys with Unicode characters?

There are several ways to get varying degrees of support for UTF-8
into LaTeX. There are also several citation packages (some of which
can't handle multi-byte characters & some of which can). What will
work really depends on the specific combination of packages a user
is running.

I don't know what other reference managers do to citekeys in UTF-8
export. My hunch is that they leave them as UTF-8, but it would be
worth checking.

--Rick

Louis-Dominique Dubeau

Dec 14, 2007, 8:29:02 AM
to zoter...@googlegroups.com
[Richard, you cut off the attribution lines when you replied. Makes it
hard to follow who said what.]

On Thu, 2007-12-13 at 08:22 -0800, Richard Karnesky wrote:
> > I recommend it on. I submit that distributions of TeX which can support
> > UTF8 are in the majority even if users don't realize it. I myself made
> > that discovery by chance about a year ago. And I'm not running some
> > strange setup either: the OS is Ubuntu 7.10 and the TeX distribution is
> > TeXLive 2007 which is produced by the TeX User Group (not marginal by
> > any means).
>
> I don't think this single datapoint is enough to base the decision
> on. TeXLive is available for all platforms, however it is not the
> dominant TeX distribution (nor representative of what is deployed).

You counter my impression about the popularity of TeXLive with your own
impression of its popularity, but we are still at the level of
impressions.

> Further: those that use older versions of TeXLive would not have the
> same experience you are having.

True but the question is not whether there exist such people but how
representative they are of the people who would want to use Zotero.

> I believe you are finding it easy to
> use UTF-8 because, beginning with version 2007, TeXLive shipped with
> XeTeX (which handles native unicode (although still not perfectly)).

XeTeX is included in TeXLive 2007 but I do not use XeTeX. For Ubuntu,
TeXLive is divided into several smaller packages: the one which contains
XeTeX (named texlive-xetex) is not installed on my machine. And the
mere inclusion of XeTeX in the distribution does not mean that
everything else has been changed. Moreover, if you go back to my
original post, you'll see that TeXLive started supporting
Unicode-encoded BibTeX before the 2007 release.

So, no, XeTeX is not a factor.

> Also note that arXiv & other popular/online/public TeX users are
> confined to ISO Latin 1.

What on earth are "popular/online/public TeX users"?

arXiv is an example of a site that is not at all designed to respond to
the needs of the Humanities. I've never used it before because it does
not cater to my needs but I've done a few searches and found problems
(can't do searches with diacritics, articles indexed without diacritics,
etc.). If those guys claimed to cover the Humanities in any way,
they'd get some flak.

In the grand scheme of things the important thing is to have the options
of producing UTF8 or a BibTeX file in bibtex's native format. What the
default is set to is quite secondary. RefWorks only outputs BibTeX in
UTF8, which is problematic for those with old setups. Connotea is
buggy: it outputs a mixture of TeX-coded accents and UTF8 for accents it
seems unable to handle. JSTOR is buggy: if you ask for a BibTeX record,
it strips all accents.

Ciao,
Louis

Bruce D'Arcus

Dec 14, 2007, 8:36:47 AM
to zoter...@googlegroups.com
Louis-Dominique Dubeau wrote:

...

> arXiv is an example of a site that is not at all designed to respond to
> the needs of the Humanities.

Ahem, TeX and BibTeX are "not at all designed to respond to the needs of
the Humanities" ;-). It's a system designed for mathematicians and
scientists. So I'd say arXiv is likely representative.

> In the grand scheme of things the important thing is to have the options
> of producing UTF8 or a BibTeX file in bibtex's native format. What the
> default is set to is quite secondary.

Correct.

You make a good point that the preference here should take into account
the desire of Zotero BibTeX users, rather than to worry about the entire
spectrum of BibTeX users.

I suggest people who care voice their preference, and in the absence of
a clear consensus, Dan just make an executive decision.

I'm not going to vote per se since I don't use BibTeX.

Bruce

Richard Karnesky

Dec 14, 2007, 2:20:26 PM
to zotero-dev
On Dec 14, 5:29 am, Louis-Dominique Dubeau <l...@lddubeau.com> wrote:

> True but the question is not whether there exist such people but how
> representative they are of the people who would want to use Zotero.

That's fair.


> XeTeX is included in TeXLive 2007 but I do not use XeTeX.

So do you use something like '\usepackage[utf8]{inputenc}' (either
explicitly in your .tex file or through some other package you call,
such as CJK)? I think that this has been part of the LaTeX
distribution since ca. 2004 (and was around before that). It is
widely (not universally) available. I don't know how common it is to
explicitly set the characterset in LaTeX documents that don't need
some of those characters.

Or do you do something else to make it work? (This is only to satisfy
my own curiosity--it sounds as if you're using eTeX/pdfTeX & I think
they need to be prodded like this to accept utf-8.)


> What on earth are "popular/online/public TeX users"?

Not all LaTeX is compiled on personal desktop machines & so it is
important to look at the limitations of some of the places you may
submit Zotero-produced content to.

arXiv (despite the subject matter bias) was the best example I could
come up with (as it has ca. half a million papers & strongly
encourages TeX submissions). Note that I just reviewed their site
again & arXiv only started to use pdflatex in the past few months. It
might be that they now support UTF-8.

Other publishers may have similar limitations--when I last tried, it
didn't work at Elsevier, but did work at APS.


I said:
> I don't know what other reference managers do to citekeys in UTF-8 export.

I'm not a frequent RefWorks user, but I do have a testing account. Is
it the case that the only BibTeX keys that can be assigned are the
RefWorks ID #s? If I'm missing the way to assign more useful keys, how
do they handle UTF-8 in keys?

JabRef will not use a field to auto-generate a key if it has a UTF-8
character & considers manually entered keys with those characters to
be invalid.

[Note that recent versions of bibtex and latex do not consider these
to be invalid characters. Indeed, you shouldn't even need an inputenc
since it is in a control sequence--try entering it in by hand instead
of allowing bibtex-mode to do it for you.]

It is hard to say what is the right thing to do: I personally find it
more usable to transliterate to ASCII (since I type them out on a US
keyboard). However, I know that some consider this "bastardization"
to be bad & may prefer it to be blanked (as JabRef does) or kept in
UTF-8.
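The three policies contrasted above can be sketched side by side; `keyField` and the tiny transliteration table are invented for illustration:

```javascript
// Sketch of three key-generation policies: transliterate to ASCII,
// blank the offending field (JabRef's behaviour), or keep the UTF-8.
// The transliteration table is a tiny illustrative stand-in.
var translit = { "\u00e9": "e", "\u0101": "a", "\u015b": "s" };

function keyField(text, policy) {
  if (policy === "keep") return text;  // leave UTF-8 in place
  if (policy === "blank") {
    return /[\u0080-\uFFFF]/.test(text) ? "" : text;  // drop the field
  }
  // policy === "transliterate": strip diacritics via the table
  return text.replace(/[\u0080-\uFFFF]/g, function (ch) {
    return translit[ch] || "";
  });
}
```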

--Rick

Louis-Dominique Dubeau

Dec 15, 2007, 4:24:00 PM
to zoter...@googlegroups.com

On Fri, 2007-12-14 at 11:20 -0800, Richard Karnesky wrote:
> On Dec 14, 5:29 am, Louis-Dominique Dubeau <l...@lddubeau.com> wrote:
>
> > XeTeX is included in TeXLive 2007 but I do not use XeTeX.
>
> So do you use something like '\usepackage[utf8]{inputenc}' (either
> explicitly in your .tex file or through some other package you call,
> such as CJK)? I think that this has been part of the LaTeX
> distribution since ca. 2004 (and was around before that). It is
> widely (not universally) available. I don't know how common it is to
> explicitly set the characterset in LaTeX documents that don't need
> some of those characters.

That's what I've been using since I switched back to TeX/LaTeX in the
middle of last year. And it is after that that I found that BibTeX
would accept UTF8 files. My guess is that people who don't need to use
UTF8 don't know about \usepackage[utf8]{inputenc}.
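For readers who have not seen this setup, here is a minimal example of the kind of document being described (an assumed, typical preamble, not taken from anyone's actual files):

```latex
% Minimal sketch of a classic pdfLaTeX document that accepts UTF-8
% source for characters inputenc knows how to map. Assumed setup,
% not from the thread.
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\begin{document}
Café, naïve % typed directly as UTF-8 in the source file
\end{document}
```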

Before that I was using Omega/Lambda and Aleph/Lamed which both proved
utterly complicated to use and unstable. The straw that broke the
camel's back was the inability to include Chinese and transliterated
Sanskrit in the same paper: I could get one or the other but not both at
the same time. There's probably some incantation somewhere that could
do it but I did a few tests with LaTeX, saw that I could do it fairly
easily and happily ditched Lambda and Lamed.

> Or do you do something else to make it work? (This is only to satisfy
> my own curiosity--it sounds as if you're using eTeX/pdfTeX & I think
> they need to be prodded like this to accept utf-8.)

In the latest TeXLive, latex resolves to pdftex.

> > What on earth are "popular/online/public TeX users"?
>
> Not all LaTeX is compiled on personal desktop machines & so it is
> important to look at the limitations of some of the places you may
> submit zotero-produced content to.

I see.

>
> I said:
> > I don't know what other reference managers do to citekeys in UTF-8 export.
>
> I'm not a frequent RefWorks user, but I do have a testing account. Is
> it the case that the only BibTeX keys that can be assigned are the
> RefWorks ID #s? If I'm missing the way to have more useful keys, how
> do they handle UTF-8 in keys.

They don't handle UTF8 in keys. They only generate keys that correspond
to RefWork IDs. I've checked for a preference somewhere that would
change that but I have not found it. I could have missed something
though. So if anybody knows otherwise, please correct me. That RefWork
ID scheme is not user-friendly at all. I'm tolerating it right now
because for historical (or hysterical) reasons which are totally my own
fault I'm managing my references in a half-broken way, but I think for my
next paper I will make some changes in the way I manage my
bibliographies and then the RefWork scheme will become an obstacle that
I will have to deal with somehow (unless I switch to Zotero in the
meantime).

> [Note that recent versions of bibtex and latex do not consider these
> to be invalid characters. Indeed, you shouldn't even need an inputenc
> since it is in a control sequence--try entering it in by hand instead
> of allowing bibtex-mode to do it for you.]

Good to know.

> It is hard to say what is the right thing to do: I personally find it
> more usable to transliterate to ASCII (since I type them out on a US
> keyboard). However, I know that some consider this "bastardization"
> to be bad & may prefer it to be blanked (as JabRef does) or kept in
> UTF-8.

I know there's been some discussion about key generation being
customizable to some extent. Maybe "convert key to ascii" should be
part of the options?

But this brings up other issues. Let's assume a key is generated from
the author name and title of the work. Let's also assume that those
contain Chinese characters. How do you convert to ASCII? It is
possible in theory to do it automatically but I can see two problems:

1) Incorporating the functionality into a Firefox plugin could make it
beefier than desirable. I've written a Java library to access the
Unihan database. The Java library and database together are 4.4 MB in a
jar file (most of this is the database).

2) Several Chinese characters (probably the majority of them) have
multiple possible pronunciations. Taking the first pronunciation listed
often works, but not always. And that's just assuming Mandarin; if a
scholar is citing articles written by Cantonese scholars, then
Cantonese pronunciation might be in order. (Both Mandarin and Cantonese
are in Unihan.)

I've never been able to include Chinese directly into a BibTeX file so I
don't know how urgent this kind of support is.

Ciao,
Louis

Bruce D'Arcus

Dec 16, 2007, 9:46:49 AM
to zoter...@googlegroups.com
Louis-Dominique Dubeau wrote:

...

> I know there's been some discussion about key generation being
> customizable to some extent. Maybe "convert key to ascii" should be
> part of the options?
>
> But this brings up other issues. Let's assume a key is generated from
> the author name and title of the work. Let's also assume that those
> contain Chinese characters. How do you convert to ASCII? It is
> possible in theory to do it automatically but I can see two problems:
>
> 1) Incorporating the functionality into a Firefox plugin could make it
> beefier than desirable. I've written a Java library to access the
> Unihan database. Java library and database are 4.4Mb in a jar file (most
> of this is the database).

...

Yikes. That sounds like an awful lot of complication to in essence
support legacy technology (pre-unicode BibTeX).

Keep in mind that LuaTeX is on the near horizon (final release scheduled
for next summer IIRC), which has support for unicode and OpenType
out-of-box. It also embeds Lua, which leaves room, say, for plugging in
a more modern BibTeX replacement to the LuaTeX core (say one that uses
CSL to configure styles ;-)).

Also, I think Dan mentioned they're working on allowing user-defined
keys as an option.

That doesn't help you now, but it's just to say perhaps better to look
towards the unicode future rather than worry too much about supporting
the limitations of ASCII?

Bruce

Richard Karnesky

Dec 16, 2007, 4:29:22 PM
to zotero-dev
> Keep in mind that LuaTeX is on the near horizon (final release scheduled
> for next summer IIRC), which has support for unicode and OpenType
> out-of-box.

As before: there are already LaTeX toolchains that support unicode.
The concern is that not all toolchains do (particularly those used by
some publishers) & I don't really see how LuaTeX will change this.

Given the apparent complexities of transliterating CJK, it seems more
sensible to either leave out a field from the key (as JabRef does) or
to just use the UTF-8 (and link to a tool in the FAQ to strip these
for those times where the "unicode future" is impossible to achieve).

--Rick

Bruce D'Arcus

Dec 16, 2007, 6:16:59 PM
to zoter...@googlegroups.com
Richard Karnesky wrote:
>> Keep in mind that LuaTeX is on the near horizon (final release scheduled
>> for next summer IIRC), which has support for unicode and OpenType
>> out-of-box.
>
> As before: there are already LaTeX toolchains that support unicode.
> The concern is that not all toolchains do (particularly those used by
> some publishers) & I don't really see how LuaTeX will change this.

Not overnight, but in the same way that while there are still desktop
applications and OSes that don't have real unicode support, I think it
will further contribute to the momentum that says that unicode is the
norm, and ascii and other such encodings are legacy.

E.g., at some point it will be time for the "some publishers" to get
with the 21st century.

Of course, the only publishers I've ever dealt with want nothing to do
with TeX, and tend to insist on .doc files (though sometimes I can get
the reasonable tech person who will make exceptions and accept, say, XHTML).

> Giving the apparent complexities of transliterating CJK, it seems more
> sensible to either leave out a field from the key (as JabRef does) or
> to just use the UTF-8 (and link to a tool in the FAQ to strip these
> for those times where the "unicode future" is impossible to achieve).

Yeah.

Bruce

Dan Stillman

Dec 17, 2007, 4:00:49 AM
to zoter...@googlegroups.com
Thanks for the input on the various Unicode/BibTeX issues.

It looks like I distracted us unnecessarily with the whole
which-should-be-the-default question, though the info was helpful
regardless. Simon has committed an update to Julian's patch that removes
the global pref in favor of a runtime option that should retain the
last-used setting. The translator now also uses only ASCII in cite keys,
regardless of the mode. (We can revisit this later if we find support
for UTF-8 cite keys in other software, and of course customizable keys
are still planned.) Finally, BibTeX imports are now UTF-8 only, meaning
that extended characters in files encoded as ISO-8859-1 won't import
correctly unless mapped to their ASCII BibTeX representations.

Unfortunately the updated translator, with the enhanced mapping tables,
now may be too large to reliably import via Firefox 2 mozStorage, so
we're not pushing it to existing clients until Zotero 1.5 (which will
require Firefox 3). The updated translator can be downloaded from Trac
(https://www.zotero.org/trac/raw-attachment/ticket/749/bibtex-translator.sql)
and installed into the Zotero DB with the SQLite command line client
(e.g., sqlite3 zotero.sqlite < bibtex-translator.sql). Be sure to close
Firefox and make a backup of the DB before importing.

- Dan
