Jan 4, 2001, 4:59:30 PM

Hi all

I'm a PhD student whose current research focuses on nondeterministic

FSMs. It seems to me that there are interesting ways of implementing

them efficiently in hardware. I was wondering if nondeterministic FSMs

have previously been implemented in VLSI/programmable logic and if so,

for what applications.

I would be grateful for any info/pointers regarding above. Thanks in

advance.

Regards

Jan 4, 2001, 5:38:11 PM

and I would be grateful if someone would simply explain what these things

are...

i.e., in what sense deterministic?

thanks,

-eric

--

Eric C. Fromm efr...@sgi.com

Principal Engineer Scalable Systems Division

SGI - Silicon Graphics, Inc. Chippewa Falls, Wi.

Jan 4, 2001, 6:47:10 PM

Given a state and an input symbol, a deterministic finite automaton (DFA)

can "go to" only one next state. A nondeterministic finite state automaton

(NFA), however, may have multiple candidates for the next state.

DFAs and NFAs have equivalent expressive power, namely, that of the regular

expressions.

--

George Coulouris - http://www.tc.cornell.edu/~glc5/
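In code, the difference is just the shape of the transition map: a DFA's is a function, an NFA's is a relation. A quick Python sketch (the states and symbols are invented for illustration):

```python
# A DFA's transition map is a function: (state, symbol) -> one next state.
dfa = {
    ("A", "0"): "A", ("A", "1"): "B",
    ("B", "0"): "A", ("B", "1"): "B",
}

# An NFA's is a relation: (state, symbol) -> a *set* of candidate states.
nfa = {
    ("A", "0"): {"A"},
    ("A", "1"): {"A", "B"},   # two candidates: this is the nondeterminism
    ("B", "1"): {"B"},
}

def dfa_step(state, symbol):
    return dfa[(state, symbol)]

def nfa_step(states, symbol):
    # Track every state the NFA *could* be in.
    return {t for s in states for t in nfa.get((s, symbol), ())}

assert dfa_step("A", "1") == "B"
assert nfa_step({"A"}, "1") == {"A", "B"}
```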

Jan 4, 2001, 5:21:31 PM

to Reetinder P. S. Sidhu

Hi,

tell us more about non-deterministic finite state machines.

What makes them non-deterministic?

Do they roll dice? ;-)

What's good about them?

Peter Alfke, Xilinx

==============================

"Reetinder P. S. Sidhu" wrote:

Jan 4, 2001, 6:59:43 PM

George Coulouris wrote:

>

> Given a state and an input symbol, a deterministic finite automaton (DFA)

> can "go to" only one next state. A nondeterministic finite state automaton

> (NFA), however, may have multiple candidates for the next state.

>

> DFAs and NFAs have equivalent expressive power, namely, that of the regular

> expressions.

>

> --

> George Coulouris - http://www.tc.cornell.edu/~glc5/

OK, I get the 'multiple candidates for next state' bit. You lost me with

the 'expressive power' assertion though. What is this characteristic and

how do DFAs and NFAs compare to other constructs (and what are those

constructs)?

Jan 4, 2001, 7:44:21 PM

In article <3A550E6F...@sgi.com>, Eric C. Fromm <efr...@sgi.com> writes:

> OK, I get the 'multiple candidates for next state' bit. You lost me with

> the 'expressive power' assertion though. What is this characteristic and

> how do DFAs and NFAs compare to other constructs (and what are those

> constructs)?

If I recall my computability class from eons ago, I believe given an

NFA, you can construct an equivalent DFA. Basically, the states of

the DFA map to the sets of possible states the NFA could be in.

In other words, if from state A, event 1 could take it to either

states B or C, the DFA will have a state indicating the NFA is in

either state B or C.

-- Pete Smoot
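That's the subset construction. A sketch of it in Python, using Pete's example (event "1" from state A can go to either B or C); the function name and encoding are my own:

```python
def nfa_to_dfa(nfa, start, alphabet):
    """Subset construction: each DFA state is a frozenset of NFA states."""
    start_set = frozenset([start])
    seen = {start_set}
    worklist = [start_set]
    dfa = {}
    while worklist:
        s = worklist.pop()
        for sym in alphabet:
            t = frozenset(q2 for q in s for q2 in nfa.get((q, sym), ()))
            dfa[(s, sym)] = t
            if t not in seen:
                seen.add(t)
                worklist.append(t)
    return dfa

# From state A, event "1" could take the NFA to either B or C ...
nfa = {("A", "1"): {"B", "C"}}
dfa = nfa_to_dfa(nfa, "A", ["1"])
# ... so the DFA gets a state meaning "the NFA is in either B or C":
assert dfa[(frozenset({"A"}), "1")] == frozenset({"B", "C"})
```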

Jan 4, 2001, 8:10:41 PM

In article <3A54F76B...@xilinx.com>,

I did a search on Northern Light, and came up with some links,

including these:

http://www.cse.msu.edu/~torng/360Book/NFA/

http://www.netaxs.com/people/nerp/automata/nfa1.html

http://www.grappa.univ-lille3.fr/tata/

The first link refers to the NFA as an "unrealistic model of

computation". From a theoretical perspective, it's easy to follow how

an NFA works. I don't see how you would implement this in an FPGA,

without having a random number generator to produce "non-deterministic"

branches. So then, the NFA would be implemented as an FSA with a

random number generator output being used as an input to the state

machine. I can't imagine why anyone would want to do this for a

practical application, so I am also interested to see where the OP is

going with this.

--

Greg Neff

VP Engineering

*Microsym* Computers Inc.

gr...@guesswhichwordgoeshere.com

Sent via Deja.com

http://www.deja.com/

Jan 4, 2001, 9:21:18 PM

"Reetinder P. S. Sidhu" wrote:

>

What exactly do you mean? An FSM with probabilistic state transitions

governed by a pseudo-random sequence generator like an LFSR or a linear

congruence modulo multiplier? Or governed by a truly random noise

source like amplified thermal background noise?

--

Paul W. DeMone The 801 experiment SPARCed an ARMs race of EPIC

Kanata, Ontario proportions to put more PRECISION and POWER into

dem...@mosaid.com architectures with MIPSed results but ALPHA's well

pde...@igs.net that ends well.

Jan 4, 2001, 9:33:59 PM

> I was wondering if nondeterministic FSMs have previously been implemented

Rather too often.

Thomas Maslen

mas...@pobox.com

(Oh, you meant deliberately?)

Jan 4, 2001, 9:49:57 PM

> Hi all

Hmm, a "true" nondeterministic FSA, in the theoretical computer-science

sense would be novel, but I would think impossible to do "efficiently." (At

least without some type of quantum trick.) If you have a way that really

does that I would be quite interested. It certainly would have significant

applications...

If you mean an FSM which randomly goes to different states based on certain

probabilities, I really have no clue if it has been done, nor do any

applications jump out at me. (But I'd not be shocked if there were many,

even in computer architecture.)

Mark

> Regards

--

~~~~~~~~~~~~~~~~ http://www.cps.msu.edu/~brehob ~~~~~~~~~~~~~~~~~~

~~~~~~Mark Brehob: Ultimate Player, Gamer, Computer Geek~~~~~~~~~~

Jan 4, 2001, 10:50:16 PM

At last .... a use for metastable flipflops.

-- philip

On 4 Jan 2001 13:59:30 -0800, si...@halcyon.usc.edu (Reetinder P. S. Sidhu)

wrote:

Philip Freidin

Fliptronics

Jan 4, 2001, 10:45:49 PM

Non-deterministic? We try pretty hard to make things deterministic. :-)

Tell us more.

del cecchi

"Reetinder P. S. Sidhu" <si...@halcyon.usc.edu> wrote in message

news:932ro2$j6e$1...@halcyon.usc.edu...

Jan 4, 2001, 10:57:39 PM

In article <3A552F9E...@igs.net>, Paul DeMone <pde...@igs.net> wrote:

>What exactly do you mean? An FSM with probablistic state transitions

>governed by a pseudo random sequence generator like an LFSR or a linear

>congruence modulo multiplier? Or governed by a truely random noise

>source like amplified thermal background noise?

_Compilers:_Principles,_Techniques,_and_Tools_

discusses this. What you do with an NFA is that you essentially

"take" all the transitions that are possible in a state,

at once.

Your current "state" is really the set of all the states

of the NFA (Non-deterministic Finite Automaton) that you could

be in. You take all the possible transitions out of all those

states that you could be in. I think if one of the new

states is a terminating state, you stop.

You can implement these things pretty efficiently (for

a fixed maximum number of states) by managing your current state

as a bit-vector which indicates which of the NFA states you

are currently in.

You should be able to, um, use the current bit vector as

enables on a set of functional units (one per state) each of

which consumes the current input symbol and emits a new

bit vector of possible states. Then OR the results of all

the functional units together to get your new state. AND the

current state vector with a vector representing terminating

states; if the result is non-zero, stop.
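For what it's worth, here's a software model of that scheme: one lookup table per NFA state playing the role of the little RAMs, the current bit vector playing the role of the output enables, OR to combine the units, AND against the terminating-state vector. The 4-state NFA below is invented for illustration:

```python
N = 4  # NFA states 0..3; state i is bit (1 << i)

# trans[i][symbol] = bit vector of states reachable from state i on symbol
trans = [
    {"a": 0b0010},                # state 0 --a--> state 1
    {"a": 0b0110, "b": 0b1000},   # state 1 --a--> {1,2}, --b--> state 3
    {"b": 0b1000},                # state 2 --b--> state 3
    {},                           # state 3: terminating, no exits
]
ACCEPT = 0b1000  # bit vector of terminating states

def step(state_vec, symbol):
    new = 0
    for i in range(N):
        if state_vec & (1 << i):            # "output enable" for this unit
            new |= trans[i].get(symbol, 0)  # OR the units' outputs together
    return new

def matches(s):
    vec = 0b0001  # start in state 0 only
    for ch in s:
        vec = step(vec, ch)
        if vec & ACCEPT:  # AND with the terminating-state vector
            return True
    return False

assert matches("ab")
assert matches("aab")
assert not matches("aa")
```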

Your hardware will set a Really Hard cap on the maximum

number of NFA states you support. The details of how input symbols

select transitions will determine the inner structure of the

functional units. If, for example, you are doing string matching,

there is a straightforward way to convert a regular expression into an

NFA. If you support up to, say, 64 states (which will capture regular

expressions up to something near 64 characters long, I think), each

state (functional unit) can be a 256 element memory 64 bits wide. The

input symbol is a character, and acts as the address. Address all

the little RAMs at once, and use the current input state as output

enables on the RAMs. This could go quite fast, though where you'd

find RAMs of that shape I have no idea. I guess you could do this in

an FPGA and do regexp matching at 100M characters/second? Network

Intrusion Detection, anyone?

What I don't get is what makes this interesting;

it all seems pretty trivial to me. Am I missing something?

A previous poster basically summed it up. You implement

an NFA as a DFA (Deterministic Finite Automaton) whose states are

collections of the NFA state. The point of doing it as an NFA is

that you don't have to construct the entire DFA, you essentially

"dynamically" construct the states and transitions of an equivalent

DFA as you go. The NFA is generally quite a bit smaller than equivalent

DFAs (I think one can construct examples where equivalent DFAs all have

2^n states). Furthermore, if you need to run a finite automaton on one

input only, it is more efficient to run the NFA and (inefficiently)

construct only the DFA states you NEED instead of constructing the

entire DFA.

The referenced book has a nice discussion of many of these issues.

[ by the way, I forget, or never knew, whether one NFA can have

more than one equivalent DFA. I used the plural above, because

it feels right, but there may always be one DFA ]

Jan 5, 2001, 12:25:08 AM

> OK, I get the 'multiple candidates for next state' bit. You lost me with

> the 'expressive power' assertion though. What is this characteristic and

> how do DFAs and NFAs compare to other constructs (and what are those

> constructs)?

Wow -- the best answer to that question is probably to take a class in

formal language theory... but here's a capsule summary.

There is a standard way of describing the complexity of a program's

inputs in terms of the features the program needs in order to

tell whether or not an input is valid.

The set of valid inputs for a program is called a language, in analogy

to a natural language (like English) or a programming language.

You can describe this in terms of the class of language, the features

allowed in a grammar for the language, and the features required in a

machine that can recognise the language.

The hierarchy of languages is called the Chomsky hierarchy. Some of

the members are:

Variable names can be described by regular expressions, generated by

regular grammars, and recognized by finite state machines. You can

program one by using an enumerated type called a State, that keeps

track of where you are in the string you're trying to recognize. A

FSM would be able to tell if its input was a bunch of left and right

parentheses, but couldn't tell (in general) if they balanced -- if

your state variable had 100 states to count how many more left parens

you've seen than right parens, I could start my string with 101 left

parens and it would lose count. A compiler's lexical analyzer can

recognize regular expressions.

Arithmetic expressions can be generated by context free grammars and

recognized by pushdown automata (PDA) -- basically, finite state machines

with the addition of being able to push things on a stack, and pop

them off the stack. The stack can get infinitely deep (this is

theory, after all). Now we can tell if we have balanced parens:

every time you give me a left paren, I push it on the stack; every

time you give me a right paren I pop the stack. If I ever try to pop

an empty stack I've seen too many right parens; if there's anything

left on the stack at the end there were too many left parens.
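That pushdown check is about five lines of code (a sketch, my naming):

```python
def balanced(s):
    """PDA-style check: push on '(', pop on ')'."""
    stack = []
    for ch in s:
        if ch == "(":
            stack.append(ch)
        elif ch == ")":
            if not stack:        # tried to pop an empty stack:
                return False     # too many right parens
            stack.pop()
    return not stack             # leftovers: too many left parens

assert balanced("(()())")
assert not balanced("(()")       # too many left parens
assert not balanced("())")       # too many right parens
```

The unbounded stack is exactly what a fixed-state counter can't fake.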

Pushdown automata are interesting in that they can recognize more

languages if they can take guesses. So, if I want to recognize

palindromes with a deterministic (ie no guessing) PDA, I can only do

it if I've got a distinguished character that only appears at the

center of the palindrome -- in this case, I can push everything I see

until I hit the center, then pop everything after that. If what I pop

matches what I see all the way to the end, I've got a palindrome.

Now, if I've got a nondeterministic PDA I can just guess when to start

popping (it's guaranteed to make the right guess if there is a right

guess to be made). That's what makes the equivalence of deterministic

and nondeterministic FSMs counterintuitive: you'd think being able to

guess would make it more powerful in the sense that nPDAs are more

powerful than dPDAs, but it doesn't matter.

The parser in a compiler implements a PDA (that's not quite true, but

it'll pass in this discussion).

Turing Machines are the most powerful machines in the hierarchy; they

act like computers with infinite memory (the access to the memory is

very primitive, but that's a detail we don't need to worry about

here). One interesting thing about TMs is that it's possible to have

a language, and a string, and not be able to determine whether the

string is in the language or not! Even stranger, there are languages

for which we can tell if a string is in the language, but not

necessarily whether it's not; languages for which we can tell that a

string is definitely not in the language, but we can't tell for sure

whether it is; and (thankfully) languages for which we can in fact tell

for sure. The Halting Problem is actually an example of a language

for which we can tell if a string is in the language but can't tell if

it isn't: let the language in question be ``the set of all programs

written in C that take no input.'' We can always tell if a program

halts: just watch and see what happens. We can sometimes tell if a

program doesn't halt (hmmmm, the first executable line is while(1);).

But there are C programs that we just can't tell for sure whether they

will ever halt (there's nothing in there that we can look at and tell

whether it's going to halt, and it's been running for three years now

without halting.... but is it really in a hard loop, or is it going

to terminate Real Soon Now?).

--

Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605

Department of Computer Science FAX -- (505) 646-1002

New Mexico State University http://www.cs.nmsu.edu/~pfeiffer

VL 2000 Homepage: http://www.cs.orst.edu/~burnett/vl2000/

Jan 5, 2001, 12:31:36 AM

"Reetinder P. S. Sidhu" <si...@halcyon.usc.edu> wrote in message

news:932ro2$j6e$1...@halcyon.usc.edu...

> I'm a PhD student whose current research focuses on nondeterministic

> FSMs. It seems to me that there are interesting ways of implementing

> them efficiently in hardware. I was wondering if nondeterministic FSMs

> have previously been implemented in VLSI/programmable logic and if so,

> for what applications.

[NFA=nondeterministic finite state automaton / FSM

DFA=deterministic finite state automaton / FSM]

Perhaps someday when/if we have quantum computing machines, we can build

NFAs directly, using qubit-tuples to represent states. But that is rather

far off.

To implement an NFA in hardware today, you could convert it to a DFA using

the subset construction, that is, you build an equivalent DFA whose states

are those subsets of the set of states of the NFA that are reachable by a

given input sequence (and any number of epsilon-transitions). See e.g.

Compilers: Principles, Techniques, and Tools, Aho, Sethi, Ullman,

pp.117-121, or the google-identified lecture notes that start at

http://tina.lancs.ac.uk/computing/courses/year2/230/233/L3/sld025.htm.

See also my note on Regular Expressions and Parsing in FPGAs, at

www.fpgacpu.org/usenet/re.html.
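The epsilon-transition part amounts to closing each state set under the free moves before and after consuming a symbol. A sketch (the toy NFA below, recognizing a|b via epsilon moves, is my own, not from the cited notes):

```python
def eps_closure(states, eps):
    """All states reachable from `states` by epsilon-transitions alone."""
    out = set(states)
    work = list(states)
    while work:
        q = work.pop()
        for q2 in eps.get(q, ()):
            if q2 not in out:
                out.add(q2)
                work.append(q2)
    return frozenset(out)

# Toy NFA for a|b: 0 -eps-> {1, 2}, 1 -a-> 3, 2 -b-> 3
eps = {0: {1, 2}}
delta = {(1, "a"): {3}, (2, "b"): {3}}

def move(states, symbol):
    return eps_closure(
        {q2 for q in states for q2 in delta.get((q, symbol), ())}, eps)

start = eps_closure({0}, eps)
assert start == frozenset({0, 1, 2})
assert move(start, "a") == frozenset({3})
```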

Jan Gray, Gray Research LLC

FPGA CPU News: www.fpgacpu.org

Jan 5, 2001, 1:20:04 AM

Joe Pfeiffer wrote:

>

> "Eric C. Fromm" <efr...@sgi.com> writes:

> >

> > OK, I get the 'multiple candidates for next state' bit. You lost me with

> > the 'expressive power' assertion though. What is this characteristic and

> > how do DFAs and NFAs compare to other constructs (and what are those

> > constructs)?

>

> Wow -- the best answer to that question is probably to take a class in

> formal language theory... but here's a capsule summary.

I nominate this as 'c.a Post of the Month'.

Very nice, Joe!

Terje

--

- <Terje.M...@hda.hydro.com>

Using self-discipline, see http://www.eiffel.com/discipline

"almost all programming can be viewed as an exercise in caching"

Jan 5, 2001, 2:22:30 AM

"Paul DeMone" <pde...@igs.net> wrote in message news:3A552F9E...@igs.net...

> "Reetinder P. S. Sidhu" wrote:

> >

> > Hi all

> >

> > I'm a PhD student whose current research focuses on nondeterministic

> > FSMs. It seems to me that there are interesting ways of implementing

> > them efficiently in hardware. I was wondering if nondeterministic FSMs

> > have previously been implemented in VLSI/programmable logic and if so,

> > for what applications.

> >

> > I would be grateful for any info/pointers regarding above. Thanks in

> > advance.

>

> What exactly do you mean? An FSM with probablistic state transitions

> governed by a pseudo random sequence generator like an LFSR or a linear

> congruence modulo multiplier? Or governed by a truely random noise

> source like amplified thermal background noise?

"Nondeterministic" has nothing to do with randomness.

A "Nondeterministic Finite State Machine" would not choose which

transition out of a state to take: it takes all of them; that's

the "nondeterministic" part of it.

The reason such a machine is interesting is that it

can solve NP problems in polynomial-bounded time.

However, the amount of hardware it takes to do so is

pretty large.

IIRC, determining whether a proposed solution to an NP

problem actually does solve it is a problem in P. That is,

a deterministic machine can test in polynomial-bounded

time whether a particular proposal solves an NP problem.

Therefore, one direct way to build the equivalent of a

nondeterministic FSM is to build one deterministic FSM

(and we all know how to do that :) for each possible solution

to the NP problem, each of which checks whether its

solution is a correct one, and operate them all in parallel.

If the domain of possible solutions is finite and small enough,

this isn't entirely impractical, and since all the FSMs are

identical except for the solution they are testing,

designing such hardware would be straightforward.
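A software caricature of that scheme, using SUBSET-SUM as the stand-in NP problem (my choice; the names are invented): one identical checker per candidate solution, where each check is cheap even though the candidate space is exponential. Hardware would run all the checkers simultaneously; here we just loop.

```python
from itertools import product

def check(candidate, items, target):
    """One 'checker unit': does this candidate subset hit the target?
    Verifying a proposed solution is cheap (polynomial) even though
    searching the whole solution space is not."""
    return sum(x for x, keep in zip(items, candidate) if keep) == target

def solve(items, target):
    # One identical checker per point of the (finite, small enough)
    # solution domain, each testing its own candidate.
    for candidate in product([0, 1], repeat=len(items)):
        if check(candidate, items, target):
            return candidate
    return None

assert solve([3, 5, 8], 11) == (1, 0, 1)   # 3 + 8 == 11
assert solve([3, 5, 8], 2) is None
```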

I imagine there might be some interesting problems

where the hardware to do this might be justified by

the increase in speed, and especially by the improvement

in worst-case time-to-solution.

--

Dennis O'Connor dm...@primenet.com

Vanity Web Page: http://www.primenet.com/~dmoc/

Jan 5, 2001, 4:38:50 AM

amoli...@visi-dot-com.com writes:

> [ by the way, I forget, or never knew, whether one NFA can have

> more than one equivalent DFA. I used the plural above, because

> it feels right, but there may always be one DFA ]

In general, both NFAs and DFAs can have redundant states, i.e., states

where the eventual outcome will be the same for any subsequent input

they are given.

In both cases, AFAIR, there are algorithms to reduce an automaton to

the smallest possible number of states, and I think those minimal

automata are unique. Authoritative answers in the dragon book, I'm

sure.

However, using a minimal NFA does not guarantee that the subset

algorithm alone will produce a minimal DFA.

Also, all of the above only applies as is to automata where you run

the whole input through and see if you end up in a success state.

Regular expressions are often unanchored at the right, and if you want

a matcher that stops as soon as it has a match, there's a few tricks

you have to use.

Lars Mathiesen (U of Copenhagen CS Dep) <tho...@diku.dk> (Humour NOT marked)

Jan 5, 2001, 6:51:09 AM

tho...@diku.dk (Lars Henrik Mathiesen) writes:

>amoli...@visi-dot-com.com writes:

>> [ by the way, I forget, or never knew, whether one NFA can have

>> more than one equivalent DFA. I used the plural above, because

>> it feels right, but there may always be one DFA ]

>In general, both NFAs and DFAs can have redundant states, i.e., states

>where the eventual outcome will be the same for any subsequent output

>they are given.

>In both cases, AFAIR, there are algorithms to reduce an automaton to

>the smallest possible number of states, and I think those minimal

>automata are unique.

Minimal DFA's are unique, minimal NFA's are not.

>However, using a minimal NFA does not guarantee that the subset

>algorithm alone will produce a minimal DFA.

Indeed. And minimizing an NFA has higher complexity than minimizing a

DFA, so there is little point in doing this first.

>Also, all of the above only applies as is to automata where you run

>the whole input through and see if you end up in a success state.

>Regular expressions are often unanchored at the right, and if you want

>a matcher that stops as soon as it has a match, there's a few tricks

>you have to use.

Most applications want the longest matching prefix of a string, not

the shortest.
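A longest-match driver is just "run to the end, remembering the last position where an accepting state was live." A sketch with an invented toy DFA for a+b? :

```python
def longest_match(dfa, start, accepting, text):
    """Return the length of the longest accepting prefix of `text`, or -1."""
    state = start
    best = 0 if start in accepting else -1
    for i, ch in enumerate(text):
        state = dfa.get((state, ch))
        if state is None:        # dead: no further match is possible
            break
        if state in accepting:   # remember the last accepting position
            best = i + 1
    return best

# Toy DFA for a+b? : S --a--> A, A --a--> A, A --b--> B
dfa = {("S", "a"): "A", ("A", "a"): "A", ("A", "b"): "B"}
accepting = {"A", "B"}
assert longest_match(dfa, "S", accepting, "aab") == 3
assert longest_match(dfa, "S", accepting, "aba") == 2
assert longest_match(dfa, "S", accepting, "ba") == -1
```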

Torben Mogensen (tor...@diku.dk)

Jan 5, 2001, 8:26:10 AM

In article <932ro2$j6e$1...@halcyon.usc.edu>,

I wonder what precisely you call nondeterministic FSMs. I see at

least 3 meanings for it:

- something like what is used in the building of a scanner, which can

also be described as an FSM having several active states at one

moment, where all possible state transitions are taken at a given

time;

- something like Petri nets which, if memory serves, can also be

described as an FSM having several active states at one moment;

one arbitrary possible transition is taken at a given time; one

transition deactivates one state and activates one or more new

states (if they were not already activated, obviously);

- an FSM having one active state where one random possible state

transition is taken.

The first two may also be described as a deterministic FSM, and I see

little interest in implementing them any other way in VLSI. Wait,

it is the other way around: deterministic FSMs are implemented as

nondeterministic FSMs in the first meaning (every flip-flop corresponds

to a state; all possible transitions are always taken) :-) What I want

to say is that all these kinds of machines are different descriptions

of the same reality. If you have not seen this, I wonder whether your

way of implementing them is really all that efficient (perhaps if

you consider other constraints than the classical ones, but then it is

more the constraints that are interesting than the solution).

I know nothing about implementing the third meaning. I currently

fail to see an application as well as an efficient implementation in

VLSI.

-- Jean-Marc

Jan 5, 2001, 9:04:28 AM

Terje Mathisen wrote:

>

> Joe Pfeiffer wrote:

> >

> > "Eric C. Fromm" <efr...@sgi.com> writes:

> > >

> > > OK, I get the 'multiple candidates for next state' bit. You lost me with

> > > the 'expressive power' assertion though. What is this characteristic and

> > > how do DFAs and NFAs compare to other constructs (and what are those

> > > constructs)?

> >

> > Wow -- the best answer to that question is probably to take a class in

> > formal language theory... but here's a capsule summary.

>

> I nominate this as 'c.a Post of the Month'.

>

> Very nice, Joe!

>

> Terje

My sentiments exactly.

Jan 5, 2001, 9:44:03 AM

In article <3A556794...@hda.hydro.com>,

Yes, quite informative (what I understood of it), but disappointing. I thought we

would get to make state machines with little white noise generators in them to

aid in the control of the state transitions. :-)

--

Del Cecchi

cecchi@rchland

Jan 5, 2001, 11:47:47 AM

In article <933shl$1h5a$1...@node17.cwnet.frontiernet.net>,

"Dennis O'Connor" <dm...@primenet.com> wrote:

(snip)

>

> "Nondeterministic" has nothing to do with randomness.

>

> A "Nondeterminsitic Finite State Machine" would not chose which

> transition out of a state it would take: it takes all of them : that's

> the "nondeterminstic" part of it.

>

(snip)

The information that I found on the WWW indicates that only one of the

branches is taken, not all of them at the same time. If it took all of

the possible branches at the same time, then it would be deterministic,

and would end up in multiple states simultaneously. I thought that

each NFA set description in the configuration sequence indicates the

set of possible states (only one of which is active) at that point in

time.

--

Greg Neff

VP Engineering

*Microsym* Computers Inc.

gr...@guesswhichwordgoeshere.com

Jan 5, 2001, 1:48:24 PM

In article <934trg$p29$1...@nnrp1.deja.com>, Greg Neff wrote:

>In article <933shl$1h5a$1...@node17.cwnet.frontiernet.net>,

> "Dennis O'Connor" <dm...@primenet.com> wrote:

>(snip)

>>

>> "Nondeterministic" has nothing to do with randomness.

>>

>> A "Nondeterminsitic Finite State Machine" would not chose which

>> transition out of a state it would take: it takes all of them : that's

>> the "nondeterminstic" part of it.

>>

>(snip)

>

>The information that I found on the WWW indicates that only one of the

>branches is taken, not all of them at the same time. If it took all of

>the possible branches at the same time, then it would be deterministic,

>and would end up in multiple states simultaneously. I thought that

>each NFA set description in the configuration sequence indicates the

>set of possible states (only one of which is active) at that point in

>time.

[snip]

If any path puts the NFA in a final state, the input string is accepted.

You can visualize this as the NFA taking all paths simultaneously, or as the

NFA "choosing" the "right" path nondeterministically.

"It's only a model" -- Patsy

Jan 5, 2001, 1:03:35 PM

Right. That property lets you create machines that accept certain

classes of input strings (languages) with fewer states than

normal deterministic FSMs, and possibly, I forget, create machines

that accept languages that deterministic FSMs can't distinguish.

Now, will somebody please explain to me why NFSMs are worth

talking about? I learned about and taught this gorp back in the

late 60s. (I certainly wouldn't waste time doing so now.) It was

sort of cute theory, not much content that I could see, with

little to recommend it in practice. Can't say that I see much

difference now, and there are lots more interesting things to

talk about.

Greg Pfister

<not my employer's opinion>

Jan 5, 2001, 2:53:02 PM

In article <9354to$dde$1...@news01.cit.cornell.edu>,

gl...@cornell.edu wrote:

(snip)

>

> If any path puts the NFA in a final state, the input string is

accepted.

> You can visualize this as the NFA taking all paths simultaneously, or

as the

> NFA "choosing" the "right" path nondeterministically.

>

(snip)

Okay, so if I had to implement this in hardware (as requested by the OP)

then I can see two different approaches:

1) Follow all paths simultaneously. This is completely deterministic

from an implementation perspective. In this case, multiple states can

be active at the same time, and the final state *will* be active if the

input string is acceptable. From a hardware perspective this is

trivial, so I don't know why the OP would need assistance in

implementing this.

2) Non-deterministically pick one path when more than one path is

possible from each state. In this case, only one state will be active

at any given point in time, and the final state *may* be active if the

input string is acceptable. I'm thinking that this is what the OP was

asking about, but I'm not sure.

Jan 5, 2001, 6:13:10 PM

Greg Pfister <pfi...@us.ibm.com> writes:

> Right. That property lets you create machines that accept certain

> classes of input strings (languages) with fewer states than

> normal deterministic FSMs, and possibly, I forget, create machines

> that accept languages that deterministic FSMs can't distinguish.

>

> Now, will somebody please explain to me why NFSMs are worth

> talking about? I learned about and taught this gorp back in the

> late 60s. (I certainly wouldn't waste time doing so now.) It was

> sort of cute theory, not much content that I could see, with

> little to recommend it in practice. Can't say that I see much

> difference now, and there are lots more interesting things to

> talk about.

I'm not Mr. Hardware, but it occurs to me that there might be some use

for networking gear - switches, routers, maybe firewalls. If I can

add new MACs or IPs to a NFA instead of a flat table that I have to

iterate through, it could be a speedup. Maybe CAMs make this moot,

though.

Kelly

Jan 5, 2001, 6:01:55 PM1/5/01

to

Del Cecchi wrote:

> Yes, quite informative (what I understood of it), but disappointing. I thought we

> would get to make state machines with little white noise generators in them to

> aid in the control of the state transitions. :-)


You could synthesize the NFA, and use Q-bits instead of flip-flops to

hold the states. According to the theory of quantum computers, you

should have an efficient hardware implementation of a NFA (i.e. you need

just log2(n) Q-bits, even if the corresponding DFA requires 2^n states,

and therefore at least n Flip-Flops).

The labs at IBM seem to have produced some Q-bits lately.

--

Bernd Paysan

"If you want it done right, you have to do it yourself"

http://www.jwdt.com/~paysan/

Jan 5, 2001, 8:05:45 PM1/5/01

to

Greg Pfister <pfi...@us.ibm.com> writes:

> Now, will somebody please explain to me why NFSMs are worth

> talking about? I learned about and taught this gorp back in the

> late 60s. (I certainly wouldn't waste time doing so now.) It was

> sort of cute theory, not much content that I could see, with

> little to recommend it in practice. Can't say that I see much

> difference now, and there are lots more interesting things to

> talk about.

Because regular expressions are (most of the time) the most

straightforward way of describing the set you want to recognize

(assuming a FSM can do it at all), and REs translate easily to NFSMs.

Then you use a NFSM -> DFSM construction to get something executable.

Also, of course, it's intuitively obvious that NFSMs *have* to be more

powerful than DFSMs, so the fact that the construction is so easy

makes a good warmup for hitting the same fact later in Turing

Machines.
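The RE -> NFSM direction is Thompson's construction: each operator glues together sub-machines with epsilon transitions. A minimal Python sketch for postfix regular expressions, where '.' is concatenation, '|' is union, '*' is Kleene star, and None labels an epsilon edge (the postfix convention and all names are assumptions made for brevity, not from the thread):

```python
def thompson(postfix):
    """Build an epsilon-NFA from a postfix RE.
    Returns (start, accept, trans), where trans maps
    (state, symbol_or_None) -> set of next states."""
    trans = {}
    counter = [0]

    def new_state():
        counter[0] += 1
        return counter[0]

    def add(s, sym, t):
        trans.setdefault((s, sym), set()).add(t)

    stack = []
    for ch in postfix:
        if ch == '.':                       # concatenation: a1 --eps--> s2
            s2, a2 = stack.pop()
            s1, a1 = stack.pop()
            add(a1, None, s2)
            stack.append((s1, a2))
        elif ch == '|':                     # union: new start/accept wrap both
            s2, a2 = stack.pop()
            s1, a1 = stack.pop()
            s, a = new_state(), new_state()
            add(s, None, s1); add(s, None, s2)
            add(a1, None, a); add(a2, None, a)
            stack.append((s, a))
        elif ch == '*':                     # Kleene star: loop plus bypass
            s1, a1 = stack.pop()
            s, a = new_state(), new_state()
            add(s, None, s1); add(s, None, a)
            add(a1, None, s1); add(a1, None, a)
            stack.append((s, a))
        else:                               # literal symbol
            s, a = new_state(), new_state()
            add(s, ch, a)
            stack.append((s, a))
    return stack[-1][0], stack[-1][1], trans
```

Each operator adds at most two states, so the NFA is linear in the size of the RE; the blow-up, if any, happens only in the later NFSM -> DFSM step.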

Jan 5, 2001, 8:29:18 PM1/5/01

to

Greg Neff <greg...@my-deja.com> writes:

> Okay, so if I had to implement this in hardware (as requested by the OP)

> then I can see two different approaches:

>

> 1) Follow all paths simultaneously. This is completely deterministic

> from an implementation perspective. In this case, multiple states can

> be active at the same time, and the final state *will* be active if the

> input string is acceptable. From a hardware perspective this is

> trivial, so I don't know why the OP would need assistance in

> implementing this.

This has to be what he meant. The thing is, a naive translation from

NFSM to DFSM can take up to 2^N states, where N is the number of states in the

NFSM. There are ways to reduce the number of states, but I don't know

if there is any sort of proved optimum, in general. The OP's point

was that he was looking for efficient representation.

>

> 2) Non-deterministically pick one path when more than one path is

> possible from each state. In this case, only one state will be active

> at any given point in time, and the final state *may* be active if the

> input string is acceptable. I'm thinking that this is what the OP was

> asking about, but I'm not sure.

Unless the OP is using words in nonstandard ways, this just isn't

possible. You see, it's not just that the automaton makes a choice,

it always makes the *right* choice, based on information it doesn't

have yet. Can't do that, sorry!
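The 2^N figure comes from the textbook subset construction: each DFA state is a *set* of NFA states, and only the reachable sets are actually built. A minimal Python sketch, using an invented NFA for "second-to-last symbol is 1" (the classic family where the DFA really does need 2^N states):

```python
from itertools import chain

def subset_construction(nfa_trans, start, alphabet):
    """nfa_trans: dict state -> {symbol: set_of_states}.
    Returns a DFA as dict frozenset -> {symbol: frozenset}."""
    start_set = frozenset([start])
    dfa = {}
    todo = [start_set]
    while todo:
        current = todo.pop()
        if current in dfa:
            continue
        dfa[current] = {}
        for sym in alphabet:
            # union of all NFA moves out of every state in this set
            nxt = frozenset(chain.from_iterable(
                nfa_trans.get(s, {}).get(sym, set()) for s in current))
            dfa[current][sym] = nxt
            if nxt not in dfa:
                todo.append(nxt)
    return dfa

# Hypothetical NFA: "second-to-last symbol is 1" (n = 2 case of the
# "n-th from last" family, whose DFA needs 2^n states)
nfa = {
    "q0": {"0": {"q0"}, "1": {"q0", "q1"}},
    "q1": {"0": {"q2"}, "1": {"q2"}},
    "q2": {},
}
dfa = subset_construction(nfa, "q0", "01")
```

Here the 3-state NFA yields a 4-state (2^2) DFA; for "n-th symbol from the last is 1" the gap grows to n+1 versus 2^n, which is exactly the kind of efficiency question the OP seems to be after.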

Jan 8, 2001, 6:19:12 AM1/8/01

to

In article <3A560C77...@us.ibm.com>,

Greg Pfister <pfi...@us.ibm.com> wrote:

>Now, will somebody please explain to me why NFSMs are worth

>talking about? I learned about and taught this gorp back in the

>late 60s. (I certainly wouldn't waste time doing so now.) It was

>sort of cute theory, not much content that I oculd see, with

>little to recommend it in practice. Can't say that I see much

>difference now, and there are lots more interesting things to

>talk about.


If you're a hardware person, DFSMs are all over the place, eg

multiprocessor cache protocols, hardware cache/TLB refills, or for that

matter hardwired within-an-instruction sequencing logic (the sort that

might otherwise be done by microcode). I don't know for sure, but

perhaps NFSMs might come in handy for investigating or reasoning about

the set of all reachable states of a DFSM for design-validation purposes?

If you're a software person, think of an NFSM as one way of implementing

regular expressions (regexps)... which are the essence of egrep(1) and

kindred programs, and are very important components of Perl, the more

powerful command-line shells, all Real Programmer's Editors, and assorted

other software which escapes my mind at the moment.

There's a considerable (albeit rather scattered) literature on implementing

regexps, with several good freeware packages, occasional usenet flamefests

on which is "better" (faster, less buggy, ...). POSIX 1003.2 defines a

(actually 2 different flavors of) "standard" regexp syntax/semantics, but

I don't know if typical regexp packages implement it or a superset.

If you're a software person of the compiler-writing flavor, NFSMs are

an essential part of the formal languages background you need to solidly

grok how a lexical-analyser generator (LEX, FLEX, ...) or parser generator

(YACC, Bison, ANTLR, ...) works.

--

-- Jonathan Thornburg <jth...@thp.univie.ac.at>

http://www.thp.univie.ac.at/~jthorn/home.html

Universitaet Wien (Vienna, Austria) / Institut fuer Theoretische Physik

Q: Only 7 countries have the death penalty for children. Which are they?

A: Congo, Iran, Nigeria, Pakistan[*], Saudi Arabia, United States, Yemen

[*] Pakistan moved to end this in July 2000. -- Amnesty International,

http://www.web.amnesty.org/ai.nsf/index/AMR511392000

Jan 9, 2001, 4:25:44 AM1/9/01

to

Mark W Brehob <bre...@cse.msu.edu> writes:

> If you mean a FSM which randomly goes to different states based on certain

> probabilities, I really have no clue if it has been done, nor do any

> applications jump out at me.

This sounds like a Markov chain/model to me. Is there a relationship between

NFAs and Markov models?

If you're using speech recognition, you're using one variant of Markov models

called HMMs. These amount to a type of hypothesis testing based on transition

probabilities (usually measured heuristically or estimated in some way).
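For the "rolls dice" reading Mark mentions, the machine is just a Markov chain: the next state is drawn at random according to per-state transition probabilities. A toy Python sketch; the two-state weather chain and its numbers are invented for illustration:

```python
import random

# Hypothetical Markov chain: state -> list of (next_state, probability)
chain = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state, rng=random):
    """Take one probabilistic transition out of `state`."""
    r, acc = rng.random(), 0.0
    for nxt, p in chain[state]:
        acc += p
        if r < acc:
            return nxt
    return nxt                              # guard against rounding error
```

An HMM adds one more layer: the state sequence itself is hidden, and each state emits observable symbols with its own probabilities, which is where the hypothesis-testing flavor comes in.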

Jan

Jan 12, 2001, 3:15:53 PM1/12/01

to

In article <TCb56.8725$4c.7...@ruti.visi.com>,

amoli...@visi-dot-com.com wrote:

> In article <3A552F9E...@igs.net>, Paul DeMone <pde...@igs.net> wrote:

> >What exactly do you mean? An FSM with probablistic state transitions

> >governed by a pseudo random sequence generator like an LFSR or a linear

> >congruence modulo multiplier? Or governed by a truely random noise

> >source like amplified thermal background noise?

>

> _Compilers:_Principles,_Techniques,_and_Tools_

> discusses this. What you do with an NFA is that you essentially

> "take" all the transitions that are possible in a state,

> at once.

>

> Your current "state" is really the set of all the states

> of the NFA (Non-deterministic Finite Automaton) that you could

> be in. You take all the possible transitions out of all those

> states that you could be in. I think if one of the new

> states is a terminating state, you stop.

From a different point of view, all this comes across as very much like

"quantizing" traditional FSMs. Recall that to quantize a classical

"system", you construct the free complex vector space over the state space

of the classical system, ie the elements, |psi> of your quantized theory

are vectors associating a complex number with every possible state of

the underlying classical system. You then define some linear scheme for

these states to evolve which is trivially defined by the underlying FSM.

The only real difference would appear to be that the NFA is filling that

vector with 1s and 0s rather than complex numbers. Obviously this works

well for simulation and hardware construction. For more general discussion

of mathematical properties, the more general free complex vector space

(with 0 treated as 0 and non-zero treated as 1 at the end of the day) may

be more powerful.
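That vector view can be made concrete over the boolean semiring: hold the active-state set as a 0/1 vector v and one 0/1 transition matrix per input symbol, so one step is the boolean matrix product v' = v T[sym]. A Python sketch; the three-state machine (the "ends in 01" toy again) is invented for illustration:

```python
# Hypothetical NFA over {0,1} ending in "01", states indexed 0,1,2.
# T[sym][i][j] == 1 iff state i goes to state j on input sym.
T = {
    "0": [[1, 1, 0],      # from state 0 on '0': stay in 0, also enter 1
          [0, 0, 0],
          [0, 0, 0]],
    "1": [[1, 0, 0],      # from state 0 on '1': back to 0
          [0, 0, 1],      # from state 1 on '1': to accepting state 2
          [0, 0, 0]],
}

def bool_step(v, M):
    """One NFA step: v' = v M over the boolean (or, and) semiring."""
    n = len(v)
    return [int(any(v[i] and M[i][j] for i in range(n))) for j in range(n)]

def accepts(string, start=(1, 0, 0), accept=2):
    v = list(start)
    for sym in string:
        v = bool_step(v, T[sym])
    return bool(v[accept])
```

Replacing the 0/1 entries with complex amplitudes and requiring the matrices to be unitary is exactly the "quantized" version Maynard describes.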

Maynard
