VOTE - PoS or Parsing


Scott Frye

Aug 10, 2009, 11:02:13 PM
to Natural Language Processing Virtual Reading Group
For this month's selected papers I've included three PoS papers (some
from last month's list that we didn't select) in case people are
interested in exploring the part-of-speech topic further from different
angles.

I've also selected three papers in the "parsing" arena that I thought
looked interesting and representative of statistical parsing.

Voting will go on for two weeks (until Aug 24th), and then we will
select the paper with the most votes to read.

Papers to consider for the month.

Part of Speech
1. TnT - A statistical Part of Speech Tagger - Thorsten Brants -
http://acl.ldc.upenn.edu/A/A00/A00-1031.pdf
Cites: 780+
Year: 2000
Desc: Uses Markov models for predicting POS tags

2. A practical Part-of-speech tagger - Cutting et al.
http://eprints.kfupm.edu.sa/20079/1/20079.pdf
Cites: 580+
Year: 1992
Desc: An earlier paper on using Markov models.

3. A Fully Bayesian Approach to Unsupervised Part-of-Speech Tagging
http://acl.ldc.upenn.edu/P/P07/P07-1094.pdf
Cites: 41
Year: 2007
Description: A very recent paper on part-of-speech tagging that explores
an unsupervised approach.

Parsing
4. Three Generative, Lexicalised Models for Statistical Parsing -
Collins
http://acl.ldc.upenn.edu/P/P97/P97-1003.pdf
Cites: 103
Year: 1997
Description: A statistical grammar for parsing

5. A stochastic parts program and noun phrase parser for unrestricted
text - Church
http://acl.ldc.upenn.edu/A/A88/A88-1019.pdf
Cites: 1109
Year: 1988
Description: An early paper about statistical parsing.

6. A Maximum Entropy Inspired Parser - Charniak
http://acl.ldc.upenn.edu/A/A00/A00-2018.pdf
Cites: 1084
Year: 2000
Description: A more recent parser that utilizes maximum entropy




Ted Dunning

Aug 11, 2009, 11:52:45 AM
to Natural Language Processing Virtual Reading Group

I would prefer to cover more recent history. These papers are all
ancient. Also, this fixation on PoS tagging is not particularly
helpful. Let's move on to modern parsing and then beyond that.

I would counter-suggest David Magerman's parsing paper for historical
background of "parsing as decision making" (if you want a moldy paper)
and Ronan Collobert's paper for a more recent take on the problem.

Statistical Decision-Tree Models for Parsing. David M. Magerman, in
Proceedings, ACL Conference, June, 1995.
http://xxx.lanl.gov/ps/cmp-lg/9504030

Parsing as Statistical Pattern Recognition. David M. Magerman. IBM
Technical Report No. 19443. December 1993.
http://www-cs-students.stanford.edu/~magerman/papers/spatter.ps

A Unified Architecture for Natural Language Processing
http://ronan.collobert.com/pub/2008_nlp_icml.html
http://videolectures.net/ronan_collobert/


Jason Adams

Aug 11, 2009, 12:00:59 PM
to Natural Language Processing Virtual Reading Group
I agree re: looking at more recent history and dumping POS tagging.  If you want to build a class around the topic, I think we'd look at some of these older papers, but since we're a discussion group, we can always ask questions when papers that have built on previous ideas don't make sense.  So, skipping to the stuff that's more relevant would be better in my opinion.

My vote is for Collobert & Weston (2008) "A Unified Architecture for Natural Language Processing."

-- Jason

Grant Ingersoll

Aug 11, 2009, 12:11:11 PM
to Natural Language Processing Virtual Reading Group

On Aug 11, 2009, at 11:52 AM, Ted Dunning wrote:

>
>
> I would prefer to cover more recent history. These papers are all
> ancient.

If you go back in the archives, you will find the motivation for
this.  We initially felt that many people were new to NLP and thus
missed some of the "foundation", so the thought was that we would fill
that in over the first few months.  That way the majority of us could
be on the "same page" about how the field has arrived at its current
state by looking at some of the more important papers from its past.

I am fine either way.

Scott Frye

Aug 11, 2009, 8:56:27 PM
to Natural Language Processing Virtual Reading Group
Thanks for the alternate papers Ted!

As Grant said, the initial response from the group was to start with
older, foundation-oriented papers. However, if that's changed, I am
open to reading the newer papers as well.

Let's see how everyone votes and go with what people are most
interested in!



Elias Ponvert

Aug 12, 2009, 12:00:32 AM
to Scott Frye, Natural Language Processing Virtual Reading Group
Hello NLPVRG,

I sat out of the discussion of the last paper but enjoyed following the discussion :-)

For what it's worth, not all of Scott's paper suggestions are exactly "ancient". I'm thinking of this one, in particular:

A Fully Bayesian Approach to Unsupervised Part-of-Speech Tagging
http://acl.ldc.upenn.edu/P/P07/P07-1094.pdf
Cites: 41
Year: 2007
Description: A very recent paper on part-of-speech tagging that explores an unsupervised approach.

I guess it's a couple of years old, but its approach -- unsupervised induction of language structure using Bayesian statistics -- is reasonably contemporary and an active area of research. (I'm biased: I'm getting into this research space myself.)

This one would be my vote, perhaps coupled with one or another classic paper in unsupervised POS tagging. Along those lines, I strongly suggest 

Banko & Moore (2004) "Part of Speech Tagging in Context" COLING.

Ted Dunning

Aug 12, 2009, 5:14:35 PM
to Natural Language Processing Virtual Reading Group

You are correct.  I overlooked the date on this paper.  That probably
just reflects my opinion that PoS tagging is essentially a defunct
topic.

That said, the entire topic of POS tagging is itself pretty much
ancient history. It was at one time seen as a critical first step
toward language parsing because hand-written grammars up to that time
were generally written in terms of parts of speech. Since then,
however, it has become very clear that heavily lexicalized
grammars are required for useful parsing and that there is no one true
part of speech assignment, nor is deterministic assignment of POS tags
even a good idea.

Dwelling on a peripheral task like this longer than necessary really
skews readers' sense of which foundations actually matter.  Collobert
and Weston make it pretty clear that PoS tagging is better viewed as a
side effect of parsing rather than a foundation for it.


Grant Ingersoll

Aug 12, 2009, 5:25:07 PM
to Natural Language Processing Virtual Reading Group

On Aug 11, 2009, at 11:52 AM, Ted Dunning wrote:
>
> A Unified Architecture for Natural Language Processing
> http://ronan.collobert.com/pub/2008_nlp_icml.html
> http://videolectures.net/ronan_collobert/
>

+1, or the Charniak paper, but we've done one on Max Ent already, so a
different approach sounds interesting.

Paul Kalmar

Aug 12, 2009, 8:36:17 PM
to Scott Frye, Natural Language Processing Virtual Reading Group
I vote for the Charniak paper, as it will be interesting to see how Max Ent is used on a different problem.

--Paul

Elmer Garduno

Aug 13, 2009, 4:42:24 PM
to Natural Language Processing Virtual Reading Group
+1 A Maximum Entropy Inspired Parser - Charniak

Tom Morton

Aug 14, 2009, 8:18:05 AM
to Natural Language Processing Virtual Reading Group
Hi,
The Charniak paper is a decent parsing paper, but despite its title
it doesn't use maxent.  It takes some inspiration from maxent's
ability to use overlapping features in some of its estimation of
probabilities.  Otherwise it's a generative approach to parsing, as
opposed to the discriminative approach used by maxent or other
maxent-like classifiers.

Hope this helps...Tom


Grant Ingersoll

Aug 18, 2009, 9:42:51 AM
to Natural Language Processing Virtual Reading Group
Did we decide? I didn't sense a clear mandate, so maybe the EOTM
should just pick.

Scott Frye

Aug 18, 2009, 4:49:24 PM
to Natural Language Processing Virtual Reading Group
I'm going to wait until the 24th to decide.  That gives us two weeks of
voting, as we did last time.

Also, I'm on vacation running around the South and Midwest :)

Grant Ingersoll

Aug 18, 2009, 9:37:15 PM
to Scott Frye, Natural Language Processing Virtual Reading Group
I thought we did 3-4 days of voting, then 2 weeks of reading. Time
flies, though, so I may be forgetting already.

Jason Adams

Aug 18, 2009, 9:39:58 PM
to Grant Ingersoll, Scott Frye, Natural Language Processing Virtual Reading Group
I vote for decreasing the voting time and increasing the reading time. :)  And I'll switch my vote to the Charniak paper.

-- Jason

Paul Kalmar

Aug 19, 2009, 1:19:38 AM
to Jason Adams, Grant Ingersoll, Scott Frye, Natural Language Processing Virtual Reading Group
I second that.  3 days voting followed by a decision from EOTM (influenced, but not necessarily directed by votes).

--Paul

Ronald Hobbs

Aug 24, 2009, 7:02:46 AM
to Natural Language Processing Virtual Reading Group
+1 to Charniak, if we're still voting.

The Collobert paper looks good too; could we have it in the next
round?  Or is there some kind of natural progression we're trying to
follow (tagging, chunking, extraction, etc.), in which case I'm guessing
the unified architecture fits in towards the end to pull it all together?


Scott Frye

Aug 24, 2009, 1:20:20 PM
to Natural Language Processing Virtual Reading Group
This paper received 5 votes, so it is the chosen paper:

6. A Maximum Entropy Inspired Parser - Charniak
http://acl.ldc.upenn.edu/A/A00/A00-2018.pdf
Cites: 1084
Year: 2000
Description: A more recent parser that utilizes maximum entropy




Scott Frye

Sep 9, 2009, 11:27:54 AM
to Natural Language Processing Virtual Reading Group

I have to admit that I think this paper was a lot harder to understand
than the last one. I was lost on a lot of the details.

Here are some questions to stimulate discussion:
-What is the difference between a parser and a POS tagger?
-The authors say this parser is based on a probabilistic generative
model.  What is this exactly?
-What is the difference between a tag, a label and a head?
-How is the probability of the expansion generated? (this section
lost me) The paper says that "deterministic rules" pick a head of a
constituent based on the heads of its children. What would these
rules look like?
- The author refers to three parsing techniques:
Charniak(2000) - maximum entropy "inspired"
Collins(1999) - head driven statistical models
Ratnaparkhi(1999) - maximum entropy models
Are there others worth considering?


-Scott Frye

Tanton Gibbs

Sep 10, 2009, 1:54:11 PM
to Natural Language Processing Virtual Reading Group
I agree, this was incredibly hard to parse (no pun intended) for me as
well. I couldn't even determine the major contribution of the paper.
So, having less discussion this month may be a reflection of the
difficulty of the paper (or the way in which it was written).

The only thing I wonder is if we don't have enough background to be at
this level yet or if the paper is just difficult or poorly written.

Ted Dunning

Sep 10, 2009, 4:47:57 PM
to Natural Language Processing Virtual Reading Group


On Sep 9, 8:27 am, Scott Frye <scottf3...@aol.com> wrote:
> ... this paper was a lot harder to understand
> than the last one.  I was lost on a lot of the details.

Don't feel bad. I don't think that it was necessarily well expressed,
especially at the beginning.

Hopefully, I can clear up some of these issues.

> Here are some questions to stimulate discussion:
> -What is the difference between a parser and a POS tagger?

A parser constructs a grammatical structure for a sentence. A part of
speech (POS) tagger just labels individual words with what grammatical
role they are playing in a very general sense without reference to the
structure of the sentence or phrase.

For example, in "the red horse cart", a part of speech tagger would
tell us that "red" is serving as an adjective and that "horse" and
"cart" are nouns. Most POS taggers would not tell us that horse-cart
is a noun compound, nor that red probably applies to the cart (or
perhaps to the horse-cart compound), not the horse.

A parser, on the other hand, would give us structure such as
[NounPhrase [Article "the"] [AdjectivalPhrase "red" [NounCompound
[Noun "horse"] [Noun "cart"]]]].  Don't hold me to any high standards
on the actual labels that I use here; real grammarians tend to use
much more fine-grained distinctions.  My labels here are just made up
to help explain the concepts.
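
To make that contrast concrete, here is a tiny Python sketch (mine, not
from the paper, and not a real tagger or parser) that just shows the two
kinds of output side by side: a flat list of (word, tag) pairs versus a
nested structure over the same words.  The labels are made up, like the
ones above.

# Toy illustration of POS tagging vs. parsing: flat labels vs. structure.
pos_tags = [("the", "ART"), ("red", "ADJ"), ("horse", "NOUN"), ("cart", "NOUN")]

# The parse as nested tuples: (label, child, child, ...).
parse = ("NounPhrase",
         ("Article", "the"),
         ("AdjectivalPhrase", "red",
          ("NounCompound", ("Noun", "horse"), ("Noun", "cart"))))

def show(tree, indent=0):
    # Pretty-print the nested structure, one node or word per line.
    if isinstance(tree, str):
        print(" " * indent + tree)
        return
    label, *children = tree
    print(" " * indent + label)
    for child in children:
        show(child, indent + 2)

print(pos_tags)   # flat: per-word grammatical roles only
show(parse)       # nested: the parser's view of the same phrase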

> -The authors say this parser is based on a probabilistic generative
> model.  What is this exactly?

A generative model is one that specifies the complete joint
probability of the observable and hidden variables, conditional on any
parameters of the model.  Typically, it is phrased in terms of sub-
models that describe a simpler (more restricted) conditional
structure. This is very helpful in many inference problems where we
want to know what the hidden variables are. A non-generative model
would just tell us the probability of the observations without telling
us about the hidden variables. If there are no hidden variables, then
the two are equivalent.

To be more explicit, the generative model used in Latent Dirichlet
Allocation is very nice and clear. It supposes that a creator of text
has a small number of topics on which they can speak. When creating a
document, the text creator has in mind a distribution of topics for
the document. They then pick a document length. Then they generate
words, one at a time until the document has the required length. To
generate a word, they pick a topic from the document topic
distribution. Each topic has a distribution of words. The text
creator then uses the topic of the word to pick a specific word from
the topic's word distribution.

In this model, the parameters are:

a) the distribution of document topic distributions

b) the word probabilities for each topic

c) the parameters of the length model

The hidden variables are:

a) the topic distribution for each document

b) the specific topic for each word

The observed variables are:

a) the length of each document

b) the words in the document.
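
For what it's worth, the generative story above is short enough to write
down as code.  Below is a minimal Python sketch of the sampling process as
I described it, with a made-up vocabulary and hard-coded parameter values
(a real LDA implementation estimates the parameters from data; this only
shows the generation side).

import numpy as np

rng = np.random.default_rng(0)

vocab = ["parse", "tree", "tag", "word", "topic", "model"]
n_topics = 2

# Parameters (invented for illustration):
alpha = np.full(n_topics, 0.5)                 # prior over document topic distributions
topic_word = np.array([
    [0.40, 0.40, 0.10, 0.05, 0.03, 0.02],      # word distribution for topic 0
    [0.02, 0.03, 0.05, 0.10, 0.40, 0.40],      # word distribution for topic 1
])
mean_length = 8                                # parameter of the length model

def generate_document():
    theta = rng.dirichlet(alpha)               # hidden: this document's topic distribution
    length = max(1, rng.poisson(mean_length))  # observed: the document length
    words, word_topics = [], []
    for _ in range(length):
        z = rng.choice(n_topics, p=theta)             # hidden: the topic for this word
        w = rng.choice(len(vocab), p=topic_word[z])   # observed: the word itself
        word_topics.append(int(z))
        words.append(vocab[w])
    return words, theta, word_topics

words, theta, word_topics = generate_document()
print("document topic mixture:", theta)
print(list(zip(words, word_topics)))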

> -What is the difference between a tag, a label and a head?

A tag is usually a part of speech tag. A label is usually a label on
a parsed structure. A head is specifically the label on the root of
some sub-tree in a parsed structure.

> -How is the probability of the expansion generated?  (this section
> lost me)

That is the crux of the paper. I can't give you details on this yet
(haven't read hard enough), but schematically speaking, the generative
model gives a probability of observed and hidden variables given
parameters p(observed, hidden | parameters). For training, we use the
observed variables, make up hidden variables and eventually get a
single value or distribution of values for the parameters.

For parsing, we have the observed variables and the parameters (from
training) and have to search for hidden variable values or
distributions of values that make us happy (have high probability).
That search is the trick that makes statistical parsing with a
generative model work. This search is usually done using a trimmed
best-first sort of algorithm called beam search.
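
To make that a bit more concrete, here is a rough, generic beam-search
skeleton in Python.  It is not the actual search from Charniak's parser;
"expand" and "score" are hypothetical stand-ins for "propose ways to
extend a partial analysis" and "evaluate it under the trained model".

import heapq

def beam_search(initial_state, expand, score, is_complete, beam_width=10):
    # Keep only the beam_width best partial analyses at each step.
    beam = [initial_state]
    while beam and not all(is_complete(s) for s in beam):
        candidates = []
        for state in beam:
            if is_complete(state):
                candidates.append(state)       # carry finished analyses forward
            else:
                candidates.extend(expand(state))
        # Trim to the highest-scoring candidates; everything else is pruned,
        # which is why beam search is fast but not guaranteed to be optimal.
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return max(beam, key=score) if beam else None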

> - The author refers to three parsing techniques:
>    Charniak(2000) - maximum entropy "inspired"
>    Collins(1999) - head driven statistical models
>    Ratnaparkhi(1999) - maximum entropy models
> Are there others worth considering?

Yes. Collobert's approach is worth looking at. So are more general
Bayesian techniques (for which I don't have a reference handy).

These different approaches differ in how they come by the probability
distribution (or general quality estimate in the case of Collobert).
This difference is mostly in terms of the dependency structure, how
the parsing structure is described and thus how the search proceeds.
There are also some differences in terms of how the parameters are
estimated.

A much larger difference that is not often discussed is whether one
uses a maximum-likelihood approach for parameter estimation as opposed
to estimating a distribution of parameters.  Likewise, when parsing,
do we get a single parse structure or a distribution of parse
structures?  Naively applied, distributional techniques result in an
increase of five or more orders of magnitude in computational cost,
but there are clever approaches which cost vastly less.


Ted Dunning

Sep 10, 2009, 4:49:48 PM
to Natural Language Processing Virtual Reading Group
Could be either or both.

But reading papers just out of reach is a great way to motivate
foundational learning (if you don't just get depressed about yourself
or over-impressed by how smart everybody else seems to be).

Just dig in, ask questions, try some toy examples.

Scott Frye

Sep 11, 2009, 2:06:13 PM
to Natural Language Processing Virtual Reading Group
I agree with Ted that the issue is that this wasn't very well written.
It DOES assume a lot of background knowledge, and I don't think it does
a good job of referencing other papers that cover this assumed
knowledge.

I was also confused by the undefined words like "label" and "tag".
Ted cleared this up in his following post, but while reading the paper
I was constantly confused as to which was the part of speech and which
was the parsed non-terminal.  Better names for these two things, and a
definition of them, could have helped.

As for the main contribution, I THINK the contribution is that he
generates the grammar used by the parser from the treebank using this
"Markov" approach that he talks about in Section 2.  He then uses this
and other features (in the history) to parse the target text in a
"Maximum Entropy" inspired way.

It seems his previous paper read a grammar directly from the
treebank.  I am not sure exactly what he means by that, but it appears
that this new grammar allows for parses that were NOT seen in the
treebank, just as the POS tagger allowed tagging of words that were
not seen in the training data.
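
For what it's worth, here is a toy Python sketch of what "reading a
grammar directly from the treebank" could look like in practice: walk each
bracketed tree and count the rewrite rules it contains.  The tree format
and labels are invented for illustration; this is just my guess at the
general idea, not the paper's actual method.

from collections import Counter

# One toy "treebank" tree as nested tuples: (label, child, child, ...).
treebank = [
    ("S",
     ("NP", ("DT", "the"), ("NN", "dog")),
     ("VP", ("VBZ", "barks"))),
]

def count_rules(tree, counts):
    label, *children = tree
    if len(children) == 1 and isinstance(children[0], str):
        counts[(label, children[0])] += 1     # lexical rule, e.g. DT -> "the"
        return
    rhs = tuple(child[0] for child in children)
    counts[(label, rhs)] += 1                 # phrasal rule, e.g. S -> NP VP
    for child in children:
        count_rules(child, counts)

counts = Counter()
for tree in treebank:
    count_rules(tree, counts)

# Relative frequencies of these counts give the rule probabilities.
for (lhs, rhs), n in counts.items():
    print(lhs, "->", rhs, ":", n)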

Of course, he then confuses us in the last section by telling us that
the REAL contribution of the paper is the order in which the various
variables are guessed.  Specifically, guessing the head before the
expansion added a 2% performance increase.

- Scott Frye


Scott Frye

Sep 11, 2009, 3:10:03 PM
to Natural Language Processing Virtual Reading Group
Ted,

Thanks for the clarifications. They were VERY helpful. I'm really
interested in Collobert's approach that you mention and the link to
the paper you provided earlier. I started reading it last night but
didn't get too far before I nodded off. Hopefully I'll get to it over
the weekend.

I did read far enough to see that there is a lot of reliance on
supervised learning, which disappointed me.  For such a modern paper, I
expected more unsupervised stuff.

Thanks!

Isabel Drost

Sep 11, 2009, 3:37:04 PM
to NLP-r...@googlegroups.com
On Thursday 10 September 2009 22:47:57 Ted Dunning wrote:
> > - The author refers to three parsing techniques:
> >    Charniak(2000) - maximum entropy "inspired"
> >    Collins(1999) - head driven statistical models
> >    Ratnaparkhi(1999) - maximum entropy models
> > Are there others worth considering?
>
> Yes. Collobert's approach is worth looking at. So are more general
> Bayesian techniques (for which I don't have a reference handy).

There are also quite a few publications on parsing with structured SVMs worth
looking at. I think the dissertation of Ulf Brefeld (chapter 3) might be a
good starting point:

http://edoc.hu-berlin.de/dissertationen/brefeld-ulf-2008-02-12/PDF/brefeld.pdf

Cheers,
Isabel

--
Web: <http://www.isabel-drost.de>
IM: <xmpp://MaineC.@spaceboyz.net>
