The "hyper-quantum" blog post


Jeffrey Kegler

Sep 24, 2012, 11:53:33 PM
to Marpa Parser Mailing List
I realize that my attempts to explain Marpa include the occasional
misfire. I hope readers will put this down to the newness and
unfamiliarity of Marpa's approach to parsing.

If my recent "hyper-quantum" post seemed incomprehensible, the easiest
approach may be to ignore both it and the rest of this message and focus
on what is clear. I know that my posts with code examples usually get
their message across more clearly. Those take a lot more effort, but I
try to do as many of those as possible.

For those who refuse to give up, here goes: In my recent
"hyper-quantum" computing post, my object was to encourage Marpa users
to re-vision the transition from deterministic parsing to
non-deterministic parsing. I hoped to encourage programmers to see the
change in thinking that the transition required, not as a hassle, but as
an exciting challenge. To do this I mentioned that it is EXACTLY the
transition you'd go through IF we had new non-deterministic hardware.

For the record, Marpa does not do autothreading and does leverage
multiple cores in its parsing algorithm. Marpa is thread-safe in the
sense that N distinct Marpa grammars can safely be run on N cores. But
parses based on the same Marpa grammar share data, and cannot safely be
run in multiple threads, or on multiple cores.

For all classes of grammar in practical use, Marpa achieves linear-time
non-determinism. But it does this ENTIRELY in software.

Jeffrey Kegler

Sep 25, 2012, 1:35:12 AM
to marpa-...@googlegroups.com
Correction: Change


"For the record, Marpa does not do autothreading and does leverage multiple cores in its parsing algorithm."

to

"For the record, Marpa does not do autothreading and does NOT leverage multiple cores in its parsing algorithm."

-- jeffrey

rns

Sep 25, 2012, 11:46:24 PM
to marpa-...@googlegroups.com
My $0.05 on rephrasing:

"For the record, no autothreading or multiple cores are used in Marpa's parsing algorithm."

rns

Sep 26, 2012, 2:21:08 AM
to marpa-...@googlegroups.com
On Tuesday, September 25, 2012 6:53:35 AM UTC+3, Jeffrey Kegler wrote:
I realize that my attempts to explain Marpa include the occasional
misfire.  I hope readers will put this down to the newness and
unfamiliarity of Marpa's approach to parsing.
What I personally find ... uhmm ... interesting is that an open source tool that promises to parse natural language in linear time has gone so far unnoticed. NLP is hard, and statistical models are said to do wonders with the recent advent of big data (to the point of a proclaimed "death of science"), so probably nobody believes in practical natural language parsing anymore. But still.

If my recent "hyper-quantum" post seemed incomprehensible, the easiest
approach may be to ignore both it and the rest of this message and focus
on what is clear.
Well, anybody invoking anything quantum should probably be aware of this quote. :)
 
I know that my posts with code examples usually get 
their message across more clearly.  Those take a lot more effort, but I
try to do as many of those as possible.
This is very true—I personally found your posts on developing parsers incrementally and DSLs very valuable.
 
For the record, Marpa does not do autothreading and does leverage
multiple cores in its parsing algorithm.  Marpa is thread-safe in the
sense that N distinct Marpa grammars can safely be run on N cores.  But
parses based on the same Marpa grammar share data, and cannot safely be
run in multiple threads, or on multiple cores.
If we have ruleset S and grammars G1 and G2 based on S and precompute()d separately and recognizers R1 and R2 based on G1 and G2, then can R1 and R2 be used for thread-safe parsing?
 
For all classes of grammar in practical use, Marpa achieves linear-time
non-determinism.  But it does this ENTIRELY in software.
That's plain great—somebody has to say that, I think. :)

Jeffrey Kegler

Sep 26, 2012, 8:09:03 PM
to marpa-...@googlegroups.com
rns wrote:
> If we have ruleset S and grammars G1 and G2 based on S and
> precompute()d separately and recognizers R1 and R2 based on G1 and G2,
> then can R1 and R2 be used for thread-safe parsing?
Yes, R1 and R2 could be used in different threads safely. Recognizers
based on the same grammar CANNOT be used safely in different threads,
but for this purpose G1 and G2 are NOT the same if they were separately
created. That both were created from ruleset S is, for the purposes of
determining thread-safety, not relevant.
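The rule above can be sketched generically: per-thread safety comes from each thread precomputing its own grammar object from the shared ruleset and parsing with its own recognizer, so no mutable parse state crosses threads. The sketch below is a hedged illustration in Python with hypothetical stand-in classes; it is NOT Marpa's actual API, just the ownership pattern under discussion.

```python
import threading

# Hypothetical stand-ins for Marpa's objects -- not the real Marpa API.
RULESET_S = {"start": "expr", "rules": [("expr", ["num", "+", "num"])]}

class Grammar:
    """Owns mutable state built by precompute(); instances must not be shared."""
    def __init__(self, ruleset):
        self.ruleset = ruleset   # the immutable ruleset S may be shared freely
        self.tables = None       # mutable state, private to this instance

    def precompute(self):
        self.tables = {"rules": list(self.ruleset["rules"])}
        return self

class Recognizer:
    """Bound to one Grammar; safe only if that Grammar is not shared."""
    def __init__(self, grammar):
        self.grammar = grammar

    def parse(self, tokens):
        # Trivial stand-in: check the token count against the one rule.
        lhs, rhs = self.grammar.tables["rules"][0]
        return len(tokens) == len(rhs)

results = {}

def worker(name):
    # The safe pattern: this thread precomputes its OWN grammar from S,
    # so its recognizer touches no state owned by any other thread.
    g = Grammar(RULESET_S).precompute()
    r = Recognizer(g)
    results[name] = r.parse(["1", "+", "2"])

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

The unsafe variant would hoist `g = Grammar(RULESET_S).precompute()` out of `worker()` and hand the same object to both threads; that is the "recognizers based on the same grammar" case ruled out above.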

rns

Sep 29, 2012, 3:18:44 AM
to marpa-...@googlegroups.com
It's clear now, thanks for explaining.