Labview


Paul Wareham

Aug 6, 2013, 2:16:49 PM
to flow-based-...@googlegroups.com
What is the opinion of the group as to whether Labview would be considered an implementation of Flow Based Programming?  If you agree, what would you say are the pros and the cons of the Labview approach?

Thanks!

David Barbour

Aug 8, 2013, 4:00:51 PM
to flow-based-...@googlegroups.com
LabVIEW (and the G language) is certainly a reactive and dataflow model of some kind, but I don't know enough to say more. FBP has a few characteristics that help distinguish it: use of bounded buffers, execution that is not controlled by a clock, ad-hoc side-effects, the fact that a component with multiple inputs can block on one input even though data is available on another, etc. What characteristics does LabVIEW have?
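
For concreteness, here is a rough sketch of two of those traits - a bounded-buffer connection, and a two-input process that can sit blocked on one port while data is already waiting on the other. It is illustrative only (the conn_* names are invented for this sketch, and it assumes POSIX threads), not a claim about how any particular FBP runtime is written:

#include <pthread.h>
#include <stdio.h>

#define CAP 4   /* a small, bounded connection */

/* A bounded buffer ("connection") with blocking send/receive. */
typedef struct {
    int             items[CAP];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_full, not_empty;
} conn_t;

static void conn_init(conn_t *c) {
    c->head = c->tail = c->count = 0;
    pthread_mutex_init(&c->lock, NULL);
    pthread_cond_init(&c->not_full, NULL);
    pthread_cond_init(&c->not_empty, NULL);
}

/* Blocks while the buffer is full: back-pressure on the upstream process. */
static void conn_send(conn_t *c, int v) {
    pthread_mutex_lock(&c->lock);
    while (c->count == CAP)
        pthread_cond_wait(&c->not_full, &c->lock);
    c->items[c->tail] = v;
    c->tail = (c->tail + 1) % CAP;
    c->count++;
    pthread_cond_signal(&c->not_empty);
    pthread_mutex_unlock(&c->lock);
}

/* Blocks while the buffer is empty. */
static int conn_recv(conn_t *c) {
    pthread_mutex_lock(&c->lock);
    while (c->count == 0)
        pthread_cond_wait(&c->not_empty, &c->lock);
    int v = c->items[c->head];
    c->head = (c->head + 1) % CAP;
    c->count--;
    pthread_cond_signal(&c->not_full);
    pthread_mutex_unlock(&c->lock);
    return v;
}

static conn_t in1, in2, out;

/* Reads IN1 before IN2, so it can sit blocked on IN1 even while data is
   already waiting on IN2 -- one of the FBP traits listed above. */
static void *adder(void *arg) {
    (void)arg;
    for (;;) {
        int a = conn_recv(&in1);
        int b = conn_recv(&in2);
        conn_send(&out, a + b);
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    conn_init(&in1); conn_init(&in2); conn_init(&out);
    pthread_create(&t, NULL, adder, NULL);
    conn_send(&in2, 10);              /* arrives first, but adder blocks on in1 */
    conn_send(&in1, 32);
    printf("%d\n", conn_recv(&out));  /* 42 */
    return 0;
}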
 


On Tue, Aug 6, 2013 at 11:16 AM, Paul Wareham <enigm...@gmail.com> wrote:
What is the opinion of the group as to whether Labview would be considered an implementation of Flow Based Programming?  If you agree, what would you say are the pros and the cons of the Labview approach?

Thanks!


Paul Morrison

Aug 8, 2013, 4:48:35 PM
to flow-based-...@googlegroups.com

Just curious - what do you mean by "ad hoc side-effects"? 

TIA

Paul M.

David Barbour

Aug 8, 2013, 5:29:37 PM
to flow-based-...@googlegroups.com
For example, any FBP process has the ability to write files, manipulate robotic arms, send e-mail, etc.

Ron Lewis

Aug 8, 2013, 5:43:36 PM
to flow-based-...@googlegroups.com


On Tuesday, August 6, 2013 2:16:49 PM UTC-4, Paul Wareham wrote:
What is the opinion of the group as to whether Labview would be considered an implementation of Flow Based Programming?  If you agree, what would you say are the pros and the cons of the Labview approach?

I think this is a very interesting question. I've never used Labview, but I did see an engineer use it. I think this link might give an introductory video to help us get oriented.


So, we could ask:
  1. Can FBP be done within Labview?
  2. Maybe #1 is no. Then, could we use Labview to produce an FBP component?
  3. Or maybe both #1 and #2 are yes? That is, you can use Labview to diagram an FBP system, and you can use Labview to produce components used in that FBP system?
We can also ask, if Labview is not FBP, what is the distinguishing difference that makes Labview not FBP?

David Barbour

Aug 8, 2013, 6:29:00 PM
to flow-based-...@googlegroups.com
It seems LabVIEW is a synchronous dataflow language. All the inputs must be available for a corresponding 'instant' before the outputs become available for the same logical 'instant'.

http://en.wikipedia.org/wiki/Synchronous_programming_language

(There are other kinds of dataflow models. A lot of people favor asynchronous dataflow.)


 


Paul Tarvydas

Aug 8, 2013, 6:47:02 PM
to flow-based-...@googlegroups.com
On 13-08-08 06:29 PM, David Barbour wrote:
> It seems LabVIEW is a synchronous dataflow language. All the inputs
> must be available for a corresponding 'instant' before the outputs
> become available for the same logical 'instant'.
>
> http://en.wikipedia.org/wiki/Synchronous_programming_language
>
> (There are other kinds of dataflow models. A lot of people favor
> asynchronous dataflow.)

1. Yes, that is a biggie.

Implicit synchronization == bad.

2. I, also, don't get the impression that LabView is hierarchical, e.g.
can you write LabView components that implement new LabView components?

3. I have seen (some) LabView in use. The diagrams are all over the
place (too low level, some high level, weird (IMO) sequencing of
diagrams). Not clean enough for real programming... (IMO)

pt

David Barbour

Aug 8, 2013, 7:40:19 PM
to flow-based-...@googlegroups.com

On Thu, Aug 8, 2013 at 3:47 PM, Paul Tarvydas <paulta...@gmail.com> wrote:

Implicit synchronization == bad.

I'm not inclined to agree. Why do you believe so?
 

2.  I, also, don't get the impression that LabView is hierarchical, e.g. can you write LabView components that implement new LabView components?

You can. They call them "SubVIs". Tutorial: http://www.ni.com/white-paper/7593/en/
 

3.  I have seen (some) LabView in use.  The diagrams are all over the place (too low level, some high level, weird (IMO) sequencing of diagrams).  Not clean enough for real programming... (IMO)

Lol. Quite a few people recognize "real programming" in terms of how unclean it tends to become.



Paul Tarvydas

Aug 9, 2013, 7:54:00 PM
to flow-based-...@googlegroups.com
On 13-08-08 07:40 PM, David Barbour wrote:

On Thu, Aug 8, 2013 at 3:47 PM, Paul Tarvydas <paulta...@gmail.com> wrote:

Implicit synchronization == bad.

I'm not inclined to agree. Why do you believe so?

I've formed the opinion/observation that a lot of problems in s/w are due to unnecessary synchronization (e.g. call/return), which has resulted in epicycles like preemptive multitasking (which in turn breed further epicycles) and in the inability to test components independently, etc.

IMO, the one-way asynchronous event is the "atom" of software (or, better, the "aetheron" :-).

With it, I can construct synchronization when I want synchronization, but I am free to not have synchronization when I don't need it.

The key word above is "implicit".  I prefer "explicit".


 

2.  I, also, don't get the impression that LabView is hierarchical, e.g. can you write LabView components that implement new LabView components?

You can. They call them "SubVIs". Tutorial: http://www.ni.com/white-paper/7593/en/

OK, that's interesting, thanks.

I can't put my finger on it, but I still find that man-page unsatisfactory.

Is it that I know that the modules are synchronous dataflow? 

Is it that the operations are much too low-level?  Employing Multiply and Add components is silly (IMO).  A text expression is completely sufficient for such operations.  (Admission: we did create such low-level components at first, but that was like 20 years ago).

In my mind there is a definite chasm between brick-laying (text languages, the SubVIs examples, etc.) and architecting.  It's clear that I cannot express that difference well (other than by apprenticeships :-).


 

3.  I have seen (some) LabView in use.  The diagrams are all over the place (too low level, some high level, weird (IMO) sequencing of diagrams).  Not clean enough for real programming... (IMO)

Lol. Quite a few people recognize "real programming" in terms of how unclean it tends to become.


I used the wrong word.  It should have read "real architecting".

You have an interesting point, though.  The thing that I call architecting is a struggle to find simple notations ([sic] - multiple notations if necessary) that succinctly describe an engineered solution to a set of competing problems/constraints, in such a way as to best communicate (as well as possible) the solution to all of the stakeholders...  A set of snap-together DSL's that describe the whole solution from various perspectives.

pt

David Barbour

Aug 9, 2013, 8:58:15 PM
to flow-based-...@googlegroups.com
On Fri, Aug 9, 2013 at 4:54 PM, Paul Tarvydas <paulta...@gmail.com> wrote:
On 13-08-08 07:40 PM, David Barbour wrote:

On Thu, Aug 8, 2013 at 3:47 PM, Paul Tarvydas <paulta...@gmail.com> wrote:

Implicit synchronization == bad.

I'm not inclined to agree. Why do you believe so?

I've formed the opinion/observation that a lot of problems in s/w are due to unnecessary synchronization (e.g. call/return), which have resulted in epicycles like preemptive multitasking which result in further epicycles, or the inability to test components independently, etc.

Yeah, I once made a similar observation. But I've also made a few more: A lot of problems in s/w are due to insufficient synchronization (e.g. race conditions, heisenbugs, inconsistent state). Also, a lot of problems in s/w are due to incorrect use of explicit synchronization (e.g. deadlocks, starvation, priority inversions). 

So I wonder: maybe the problem is deeper, such as the procedural model, or the concept of 'events'.
 

IMO, the one-way asynchronous event is the "atom" of software (or, better, the "aetheron" :-).

I describe several concerns about 'events' on my blog: 


 

Is it that the operations are much too low-level?  Employing Multiply and Add components is silly (IMO).  A text expression is completely sufficient for such operations.  (Admission: we did create such low-level components at first, but that was like 20 years ago).

Low-level components are fine if you're running the whole program through a compiler. (Components at that level would suck as separate processes, though.) 
 

In my mind there is a definite chasm between brick-laying (text languages, the SubVIs examples, etc.) and architecting.  It's clear that I cannot express that difference well (other than by apprenticeships :-).

Maybe it's the bottom-up construction instead of top-down design that's bothering you? 

I like doing a bit of both, and sometimes even growing outwards from the middle. :)


Tomi Maila

Aug 11, 2013, 3:03:28 PM
to flow-based-...@googlegroups.com
I have a lot of experience with, and quite a deep understanding of, LabVIEW and its dataflow paradigm, and some experience with JavaScript and node.js. However, I am new to the NoFlo approach to dataflow-based programming, so I may not be able to provide a comprehensive comparison yet. Still, I will try to give a quick summary of LabVIEW in the following:

The basic idea of LabVIEW is that data flows between nodes that are executed concurrently and in parallel by the underlying execution scheduler. Each node is executed when all of its inputs become valid. If the inputs of multiple nodes become valid at the same time, then all of those nodes are executed at the same time by the execution system.

LabVIEW handles data "by-value". This means that when an output of a node is connected to the inputs of two or more nodes, each subsequent node gets its own copy of the output value of the previous node. Well, at least this is how it appears to the user; under the hood the LabVIEW compiler tries hard to avoid unnecessary data copies (this optimization mechanism is called inplaceness and would require a post of its own). The by-value approach is similar to the message-passing concurrency approach (Erlang, Scala, etc.) where concurrently executing nodes never share data/references. LabVIEW also supports reference types that have built-in concurrent access control, i.e. only one of the concurrently executing nodes can modify the referenced content at any given time.

LabVIEW is a strongly typed language. All connections between nodes are strongly typed, and the code is broken if type requirements are not met. LabVIEW initially was not an object-oriented language, but object orientation was added several years ago. The OOP model of LabVIEW is different from the typical OOP models of other languages, as objects are again "by-value" objects, like all other data. That means that if a node passes an object to two different subsequent nodes, LabVIEW passes two independent copies of the object that know nothing of each other. Think of this as parallel universes: when a wire is branched, the objects start living their own lives in their own universes. Again, one can create references to objects to allow sharing an object between concurrently executing nodes.
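
For readers who think in text rather than wires, the by-value branching described above behaves much like passing a C struct by value - each consumer works on its own copy, and (as with an optimizing C compiler) LabVIEW's inplaceness analysis may later remove copies it can prove unnecessary. A loose analogy only:

#include <stdio.h>

typedef struct { double x[3]; } vec;   /* plays the role of the data on a wire */

/* Two downstream "nodes"; each receives its own copy of the value. */
static vec scale(vec v) { for (int i = 0; i < 3; i++) v.x[i] *= 2; return v; }
static vec shift(vec v) { for (int i = 0; i < 3; i++) v.x[i] += 1; return v; }

int main(void) {
    vec src = {{1, 2, 3}};
    vec a = scale(src);   /* works on its own copy of src              */
    vec b = shift(src);   /* completely unaffected by what scale() did */
    printf("%g %g %g\n", src.x[0], a.x[0], b.x[0]);   /* 1 2 2 */
    return 0;
}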

LabVIEW is a visual (graphical) language, meaning that the visual model itself is the code in every respect. The visual model is compiled to binary code for different target environments. With some small exceptions, everything in the code is visual. That means that when reviewing the code, the programmer can get a full understanding of all the functionality by looking at the visual model. There are no (with some small exceptions) hidden arguments that are not visually represented.

NoFlo fbp files correspond to subVIs in LabVIEW, the basic building blocks of the language. In addition to subVIs, LabVIEW has modules called libraries (collections of subVIs) and classes (collections of subVIs called methods, together with data and type).

Unlike javascript, LabVIEW doesn't support closures or functional programming. I actually don't know yet if noflo supports functional programming.

LabVIEW has some built in structures that are not representable using subVIs. As an example of these, consider loops. In LabVIEW, a loop is a rectangle containing some code inside it where the enclosed code is executed repeatedly. 

LabVIEW supports recursion but doesn't support tail recursion. Hence recursing tens of thousands of levels deep is resource intensive, and it's recommended to use other approaches, such as explicit memory structures, instead. An interesting aspect of parallel dataflow and recursion is that you can split the execution of the code in a recursive call into multiple parallel branches and keep doing this at every level of recursion. This way you can easily and massively parallelize a task across a multi-core environment. All this is inherent in the parallel dataflow nature of the language.

Hope this summarizes what LabVIEW is and is not. I am planning to come to the Wednesday meetup in San Francisco, should anyone want to ask more detailed questions.

Thanks 

Tomi

David Barbour

Aug 11, 2013, 3:39:57 PM
to flow-based-...@googlegroups.com
Thank you for the overview. 



Tomi Maila

Aug 11, 2013, 5:33:10 PM
to flow-based-...@googlegroups.com
As a clarification, LabVIEW data buffers can be thought of as single-element queues. When a node executes, it places data into its output queues. When any subsequent node executes, it pulls the elements from the queues representing all of its inputs. Hence a node in LabVIEW dataflow is not continuously running and waiting for inputs; instead it is executed once and only once, when all of its inputs become available. A node in a LabVIEW diagram gets executed again when the whole diagram, or part of it (e.g. in a loop), is executed again. I guess this is what is referred to as synchronous dataflow in the FBP community.

Synchronous and asynchronous data flow are somewhat different programming models and both of them have their benefits. I haven't yet programmed any real world applications with pure asynchronous data flow, so it's hard for me to analyze the challenges.
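
A loose text rendering of the single-element-queue firing rule described above - not LabVIEW code, just the one-slot-per-input idea sketched in C with invented names:

#include <stdbool.h>
#include <stdio.h>

/* One-element "queue" per input wire. */
typedef struct { bool valid; double value; } slot;

typedef struct { slot a, b; double out; bool fired; } add_node;

/* Upstream nodes deposit their results into the slots. */
static void put(slot *s, double v) { s->value = v; s->valid = true; }

/* The node fires exactly once, when *all* of its inputs are valid. */
static void try_fire(add_node *n) {
    if (!n->fired && n->a.valid && n->b.valid) {
        n->out = n->a.value + n->b.value;
        n->a.valid = n->b.valid = false;  /* the inputs are consumed */
        n->fired = true;   /* won't run again until the enclosing diagram runs again */
    }
}

int main(void) {
    add_node n = {0};
    put(&n.a, 1.5);
    try_fire(&n);            /* does nothing yet: input b is not valid */
    put(&n.b, 2.5);
    try_fire(&n);            /* now fires, once */
    printf("%g\n", n.out);   /* 4 */
    return 0;
}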

Paul Tarvydas

Aug 15, 2013, 4:29:47 PM
to flow-based-...@googlegroups.com
>> A lot of problems in s/w are due to insufficient synchronization (e.g. race conditions, heisenbugs, inconsistent state). Also, a lot of problems in s/w are due to incorrect use of explicit synchronization (e.g. deadlocks, starvation, priority inversions).

>> So I wonder: maybe the problem is deeper, such as the procedural model, or the concept of 'events'.

[I am re-reading your blog about events, but FYI, this is what I'm thinking (before your blog does or doesn't change my mind :-) ]

Observation: One doesn't see FBP-ers discussing priority inversion, nor priorities, nor race conditions, etc.  These problems aren't encountered (much) when thinking fbp.

Observations:

-4: Priority inversion is an unintended consequence of the epicycle called "priorities".

-3: Priorities are epicycles meant to fix the damage caused by the epicycle called "fully preemptive processes".

-2: Fully preemptive processes are epicycles meant to fix the damage caused by the epicycle called "call-return" (implicit synchronization).

-1: Call-return is an epicycle intended to provide "reuse" (aka libraries).  Unfortunately, call-return was invented/calcified during the "goto considered harmful" hysteria period.  At the time, a plethora of software phyla were emerging (e.g. Engelbart, Sutherland, McCarthy, Iverson, et al).  The "goto winter" snuffed out all of this research and replaced it with epicyclic stack-based ridicula, meant to allow (unimportant) long-running mathematical processing loops to share time with other practically important tasks.

0: At that time the science of design using asynchronous components was totally understood, well-documented and taught in universities.  Field of study == Digital Hardware Design.  Race conditions == understood.  Time == understood.  Techniques to detect and deal with these issues were taught at the undergrad level.

+1: So, maybe the only thing we need to do is to make Software work like Digital Hardware.  The rest is "easy" and already well-documented.  Brad Cox took a stab at this idea with his "Software IC" concept.  And the Smalltalkers did so, too.  They missed the idea that asynchronous components need to be truly asynchronous - not implicitly coupled by call-return.

Go back to the stackless IBM360 and start using goto's again :-).  Structure goto's using diagrams, not ASCII.

So, maybe my interpretation is that the problem is, indeed, deeper and the solution is to strip off the decades of epicyclic sediment that has accumulated, but, not to throw out the baby with the muddy bathwater.

FBP box and wire diagrams are functional.  FBP leaf-level components are (can be) stateful.  The best of both worlds with no need for confusion! :-).

I, also, put huge value on the ability to draw concrete, compilable diagrams - for Architecture, documentation, future maintenance and communication between stakeholders (not just technical people, but communication with management and (non-programmer) domain experts).  Likewise, the simplicity of the notation and the designs which emerge when thinking fbp.  My intolerance for text-only languages grows inversely proportional to the responsibility I acquire.

pt


John Cowan

Aug 15, 2013, 5:07:30 PM
to flow-based-...@googlegroups.com
Paul Tarvydas scripsit:

> Observation: One doesn't see FBP-ers discussing priority inversion, nor
> priorities, nor race conditions, etc. These problems aren't encountered
> (much) when thinking fbp.

Well, that's because we don't try to solve problems like "keep the speedometer
reporting the current speed while making the anti-lock braking system work"
using a single network, and we just assume that all our wires have sufficient
aggregate bandwidth that no arbitration between different wires is needed
to make use of the underlying memory bus, or whatever.

Impose enough cost constraints, and the whole issue of real time would return
in different guises.

--
John Cowan http://ccil.org/~cowan co...@ccil.org
Monday we watch-a Firefly's house, but he no come out. He wasn't home.
Tuesday we go to the ball game, but he fool us. He no show up. Wednesday he
go to the ball game, and we fool him. We no show up. Thursday was a
double-header. Nobody show up. Friday it rained all day. There was no ball
game, so we stayed home and we listened to it on-a the radio. --Chicolini

David Barbour

Aug 15, 2013, 6:59:27 PM
to flow-based-...@googlegroups.com
On Thu, Aug 15, 2013 at 1:29 PM, Paul Tarvydas <paulta...@gmail.com> wrote:

Observation: One doesn't see FBP-ers discussing priority inversion, nor priorities, nor race conditions, etc.  These problems aren't encountered (much) when thinking fbp.

Race conditions have come up on this list several times, e.g. when the FBP is used for time-sensitive operations (rendering, music). But I agree that 'priority inversion' is not usually an issue in FBP.
 

+1: So, maybe the only thing we need to do is to make Software work like Digital Hardware. [...]
asynchronous components need to be truly asynchronous - not implicitly coupled by call-return

I believe the issues you attributed to call-return are caused by it being 'sequential', not by it being 'synchronous'.

Synchronous models that model concurrent operations (such as LabVIEW, PureData, Esterel, Lustre) can work well making software look like digital hardware. (I guess, unless you've worked with a lot of different systems, it is difficult to grasp the orthogonal distinctions between "concurrent", "asynchronous", and "parallel".)

Asynchronous operation can be valuable, e.g. in contexts where we don't know how long it will take to obtain an answer, or where agents are not online at the same time. But asynchrony describes different conditions than concurrency, and has different consequences. 
 

FBP box and wire diagrams are functional.  FBP leaf-level components are (can be) stateful.  The best of both worlds with no need for confusion! :-).

Unfortunately, many internal components also must be stateful (e.g. waiting for multiple 'corresponding' inputs to arrive).

 
I, also, put huge value on the ability to draw concrete, compilable diagrams - for Architecture, documentation, future maintenance and communication between stakeholders (not just technical people, but communication with management and (non-programmer) domain experts).  Likewise, the simplicity of the notation and the designs which emerge when thinking fbp.  My intolerance for text-only languages grows inversely proportional to the responsibility I acquire.

I agree there is a lot of value in visual programming. Though, I feel there should be a good textual language behind every visual environment. (And by 'good' I mean it should be suitable for human programming.)


Paul Tarvydas

Aug 15, 2013, 7:25:19 PM
to flow-based-...@googlegroups.com
On 15/08/2013 4:29 PM, Paul Tarvydas wrote:
> My intolerance for text-only languages grows inversely proportional to
> the responsibility I acquire.
D'oh, edit: "tolerance".

Paul Wareham

Aug 15, 2013, 8:20:04 PM
to flow-based-...@googlegroups.com


On Thursday, August 15, 2013 5:29:47 PM UTC-3, Paul Tarvydas wrote:


+1: So, maybe the only thing we need to do is to make Software work like Digital Hardware.  The rest is "easy" and already well-documented.  Brad Cox took a stab at this idea with his "Software IC" concept.  And the Smalltalker's did so too.  They missed the idea that asynchronous components need to be truly asynchronous - not implicitly coupled by call-return. 

I really like the idea of the Software IC so I'm inclined to believe this is a good idea to make software more like digital logic.  However, we have to ask what is the right tool for the job.  If we are expending a lot of energy contorting ourselves to make an inherently sequential calculating machine into something that simulates a digital logic IC, then we need to consider: why not actually use digital logic?  These days we have (reasonably) low cost 'soft hardware' in the form of FPGAs that can naturally implement those components in a reconfigurable fashion.

Or better yet, use a hybrid chip and do the number crunching in a von Neumann machine and the logic functions in digital logic.

But as I said, I want to believe, so let's talk about it  :-)

pw

Matthew Lai

Aug 15, 2013, 9:21:10 PM
to flow-based-...@googlegroups.com
How about the GA144 multi-computer chip?

http://www.greenarraychips.com/

You probably need a Forth-FBP to use those computers!

Matt

Paul Wareham

Aug 15, 2013, 9:45:01 PM
to flow-based-...@googlegroups.com
That's a pretty cool idea.  Although their web site looks a bit amateurish for a chip manufacturer.  

pw

Brad Cox

Aug 16, 2013, 7:25:15 AM
to flow-based-...@googlegroups.com
No, we didn't "miss the idea". It just took a year or so to implement it in C. That was done in the TaskMaster library a year or so after the Software-IC papers. A paper on that should still be on the web somewhere. My asynchronous components have always been truly asynchronous (coroutine-based), not coupled by call-return (subroutine-based).

Paul Tarvydas

Aug 16, 2013, 10:51:01 AM
to flow-based-...@googlegroups.com
I apologize for getting that wrong [1].  (I'm left to wonder whether I never knew this, or simply forgot it).

For everyone else, here's a reference to TaskMaster:

http://www.virtualschool.edu/cox/pub/TaskMaster/

pt

[1] Is the statement still correct wrt Smalltalk, or did I get that wrong, too?

Brad Cox

Aug 16, 2013, 11:07:32 AM
to flow-based-...@googlegroups.com
You may be right about Smalltalk. TaskMaster was my first effort at plowing new ground in Objective-C.

Incidentally, coroutines (lightweight threads) have been a fascination since my PDP8/I days (even before Unix came on 9 track tapes from AT&T). Coroutines and subroutines were both explained in the same section of its assembly language reference manual. I've been pecking at the edges of that idea ever since.

FWIW: LabView's been around since about the same time period.

Dr. Brad J. Cox Cell: 703-594-1883 Blog: http://bradjcox.blogspot.com http://virtualschool.edu




Paul Tarvydas

Aug 16, 2013, 11:54:58 AM
to flow-based-...@googlegroups.com
Might this be the manual?

http://www.pdp8online.com/pdp8cgi/query_docs/tifftopdf.pl/pdp8docs/dec-08-cmaa-d.pdf

Believe it or not, finding a very low-level description of coroutines might be useful to me (in explaining it, simply, to others).

I'm browsing here http://www.pdp8online.com/query_docs/query_all_files.html (not all of the manuals listed seem to be available).

Your TaskMaster doc appears to have been written in 1990.  That would be exactly the time I was writing an 8-pass compiler for a StateChart-like language, thinking there's got to be a better way (and within a year, I backed into the fbp-ish coroutine idea, using Smalltalk/V)....  I suspect that your book and Harel's StateChart paper and my playing around with deferred condition variables in hand-rolled RTOS' were my inspirations...  Interesting that LabView (and ProGraph, iirc) popped up around that time, too.

pt

Brad Cox

Aug 16, 2013, 12:02:40 PM
to flow-based-...@googlegroups.com
That one doesn't mention coroutines so I might have misremembered. May have been one of the others; PDP11, PDP12 were others I used around then.

John Cowan

Aug 16, 2013, 12:36:09 PM
to flow-based-...@googlegroups.com
Brad Cox scripsit:

> That one doesn't mention coroutines so I might have misremembered. May
> have been one of the others; PDP11, PDP12 were others I used around
> then.

I don't remember any discussion of coroutines in connection with the
PDP-8. There was a subroutine instruction, but it was not even recursive
(it stored the PC in the specified effective address, and jumped to the
following address, allowing return by indirect jump). The PDP-12 was
just a hybrid PDP-8 and LINC processor, with only one running at a time.

On the PDP-11, coroutining was straightforward, and I can well believe
it was explained in an assembly-language manual. I did much less with
assembly on the '11 than on the '8, where it was my main programming
language (okay, after Basic).

--
John Cowan co...@ccil.org http://www.ccil.org/~cowan
Does anybody want any flotsam? / I've gotsam.
Does anybody want any jetsam? / I can getsam.
--Ogden Nash, No Doctors Today, Thank You

Tom Young

Aug 16, 2013, 1:10:10 PM
to Flow Based Programming
The best explanation of coroutines (with source code examples) I have found is in "Advanced Programming in the Unix Environment", 1993, by W. Richard Stevens.  


Thomas W. Young, Founding Member
Stamford Data, LLC
47 Mitchell Street, Stamford,  CT  06902

Phone: (203)539-1278
Email: TomY...@stamforddata.com

Tomi Maila

Aug 16, 2013, 1:20:36 PM
to flow-based-...@googlegroups.com
The following white paper on how the LabVIEW compiler works is worth reading:

Paul Wareham

Aug 16, 2013, 1:30:41 PM
to flow-based-...@googlegroups.com


On Friday, August 16, 2013 2:20:36 PM UTC-3, Tomi Maila wrote:
The following white paper on how the LabVIEW compiler works is worth reading:


Interesting description.  I'd be curious to see what others think of the compilation process they are using.  Seems like a lot of back-and-forth gymnastics - wondering why y'all think it has evolved to be so complex?

pw
 

Paul Wareham

Aug 16, 2013, 1:31:23 PM
to flow-based-...@googlegroups.com
Brad - do you have a link to your most recent Software IC papers?

pw

Brad Cox

Aug 16, 2013, 1:41:49 PM
to flow-based-...@googlegroups.com
What I have is on http://virtualschool.edu. Use the local search link on that page.

Tomi Maila

Aug 16, 2013, 2:22:05 PM
to flow-based-...@googlegroups.com
The dataflow intermediate representation provides an abstraction layer detaching the visual presentation and the visual editor from the underlying domain-specific language. It is similar to the domain-specific language in NoFlo. The Low-Level Virtual Machine (LLVM) is a fairly widely used, language-agnostic framework for generating binary code for arbitrary hardware targets.

Paul Morrison

Aug 16, 2013, 8:39:28 PM
to flow-based-...@googlegroups.com
I took a Smalltalk course, and if you are saying that what they called "message passing" was really a bunch of (encoded) synchronous calls, that was certainly correct for the version of Smalltalk that I used.  I am still uncertain whether this was a deliberate con job, or whether they simply didn't realize that, most of the time, you don't have to "return".  I do remember my disappointment the first time I saw one of their so-called "collaboration diagrams" and saw all the "return" arrows!

Paul Morrison

Aug 16, 2013, 10:44:19 PM
to flow-based-...@googlegroups.com
I assume these are Ptolemaic epicycles...?!  Interesting sequence of logic... 

I see call-return slightly differently - I believe I read once that Ada Lovelace came up with subroutines before computers existed...  If you have a single instruction counter, as the basic computer model had, then subroutines came up naturally as a way of clumping function - and, if we remember that the main early use of computers was for mathematical computations, then it made a lot of sense to have subroutines like sin, cos, sqrt, etc. And this would generalize in such an environment to subroutines like summing across an array, find the max/min, etc.  Unfortunately many business functions are extremely difficult to express as subroutines - but we were told "computers can do everything", so, obviously, if we found such functions difficult to write, it was our fault, not the fault of the paradigm!  So people were essentially brain-washed not to consider alternatives -- especially not solutions which would question the basic von Neumann paradigm.

Paul Wareham

Aug 17, 2013, 11:41:37 AM
to flow-based-...@googlegroups.com
Paul -

Just curious about what you mean by 'epicycles'.  Do you basically mean procedural loops within loops or something deeper?

pw


On Thursday, August 15, 2013 5:29:47 PM UTC-3, Paul Tarvydas wrote:

John Cowan

Aug 17, 2013, 11:52:16 AM
to flow-based-...@googlegroups.com
Paul Wareham scripsit:

> Just curious about what you mean by 'epicycles'. Do you basically mean
> procedural loops within loops or something deeper?

I take it that he means "complications added to a theory to patch up
holes in it."
--
[P]olice in many lands are now complaining that local arrestees are insisting
on having their Miranda rights read to them, just like perps in American TV
cop shows. When it's explained to them that they are in a different country,
where those rights do not exist, they become outraged. --Neal Stephenson

Brad Cox

Aug 17, 2013, 12:00:11 PM
to flow-based-...@googlegroups.com
On Aug 16, 2013, at 10:44 PM, Paul Morrison <paul.m...@rogers.com> wrote:

Unfortunately many business functions are extremely difficult to express as subroutines - but we were told "computers can do everything", so, obviously, if we found such functions difficult to write, it was *our* fault, not the fault of the paradigm!  So people were essentially brain-washed not to consider alternatives -- especially not solutions which would question the basic von Neumann paradigm.

Hate to disagree with a friend but that's the point of finding subroutines and coroutines presented with *equal* emphasis in that early PDP reference manual. Both have been known for a very long time. Subroutines just became more widely known for some unclear reason; possibly the hassle of managing more than one stack area.

Same observation more broadly.... *everything* was invented in the early 1970s or before. OOP's the best example, but LabView is another (a decade later).

Paul Morrison

Aug 17, 2013, 10:32:43 PM
to flow-based-...@googlegroups.com
I agree, John, that's what I meant by Ptolemaic epicycles...  That model just became more and more complex - and, as observations became more precise, it sort of crumbled under its own weight!


John Cowan

Aug 18, 2013, 1:43:14 AM
to flow-based-...@googlegroups.com
Paul Morrison scripsit:

> I agree, John, that's what I meant by Ptolemaic epicycles... That model
> just became more and more complex - and, as observations became more
> precise, it sort of crumbled under its own weight!

In hindsight, the research program was hopeless from the start. There is
no way to superimpose circles in such a way as to generate a perfect ellipse.

--
John Cowan http://www.ccil.org/~cowan co...@ccil.org
Uneasy lies the head that wears the Editor's hat! --Eddie Foirbeis Climo

Paul Tarvydas

Aug 18, 2013, 9:53:49 AM
to flow-based-...@googlegroups.com
On 17/08/2013 11:41 AM, Paul Wareham wrote:

PW,

Epicycles in the "pre-Copernican" astronomy sense.  With the implication that scientists (computer and astrophysical) are off in the weeds.

The Ptolemaic theory was that all astronomical objects revolved about the Earth in perfectly circular orbits.

As observational data became more accurate, the model was kludged by adding perfectly circular sub-orbits on top of the larger orbits.  They got up to 40-ish epicycles when Copernicus espoused the helio-centric model[1].

Kepler nuked the Ptolemaic model when he worked out the math[2] for a helio-centric model and elliptical orbits[3].

If you go back to my post, I claim that such epicycles are visible in the last 4 decades of comp sci evolution[4].

I argue that FBP-thinking strips off the epicycles and, as Bugs Bunny used to say, we should'a turned left at Albuquerque...

pt


[1] According to Arthur Koestler (The Sleepwalkers - a wonderful read, detailing the personalities of these historical characters - we all know people like this), Copernicus doubled the number of epicycles (90-ish), but claimed (through mistake or fudging) to have halved the number.

[2] using data stolen from Tycho Brahe

[3] Kepler immediately dumped the theory and continued to work on his concentric spheres model.

[4] The same with astro-physics, string theory, etc.  See "The Electric Sky", D. Scott.

David Barbour

Aug 18, 2013, 11:06:04 AM
to flow-based-...@googlegroups.com
On Sun, Aug 18, 2013 at 6:53 AM, Paul Tarvydas <paulta...@gmail.com> wrote:
FBP-thinking strips off the epicycles and, as Bugs Bunny used to say, we should'a turned left at Albuquerque...

The thing with epicycles: they are just the way of things, a fact of life, difficult to recognize as a problem until you are privileged with the hindsight of having envisioned something better. (Paul Graham makes a similar observation in 'Beating the Averages', and calls it 'The Blub Paradox'.) You should be wondering what kind of epicycles FBP-thinking hasn't shaken.






Paul Wareham

Aug 18, 2013, 9:44:58 PM
to flow-based-...@googlegroups.com


On Sunday, August 18, 2013 10:53:49 AM UTC-3, Paul Tarvydas wrote:
On 17/08/2013 11:41 AM, Paul Wareham wrote:


I argue that FBP-thinking strips off the epicycles and, as Bugs Bunny used to say, we should'a turned left at Albuquerque...


Perhaps the entire field of what we think of as 'programming' is a series of epicycles right from the genesis of the first programming language, all as a result of the main theory of solving complex problems with a time sequential instruction calculator?

Now that we have low-cost reconfigurable logic in the form of field programmable gate array chips, perhaps 'software programming' should be more direct configuration of logic in inherently concurrent ways?  Should our machines really only be able to do just one thing at a time?

pw

Brad Cox

Aug 20, 2013, 11:46:00 AM
to flow-based-...@googlegroups.com
Coroutines in C; much as I did it in TaskMaster. Setjmp/longjmp.
http://www.yosefk.com/blog/coroutines-in-one-page-of-c.html
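
For anyone who wants the shape of the idea without reading the article, here is a minimal stackful-coroutine sketch. It is illustrative only and is not the TaskMaster code: it uses POSIX ucontext (makecontext/swapcontext) instead of setjmp/longjmp, because that keeps the suspend/resume explicit and well-defined in a few lines:

#include <stdio.h>
#include <ucontext.h>

/* Two contexts: "main" and one coroutine with its own stack. */
static ucontext_t main_ctx, co_ctx;
static char co_stack[64 * 1024];

/* A producer coroutine: it suspends itself after each item, keeping its
   own stack (and therefore its loop variable) alive across suspensions. */
static void producer(void) {
    for (int i = 0; i < 3; i++) {
        printf("produced %d\n", i);
        swapcontext(&co_ctx, &main_ctx);   /* yield back to main */
    }
}

int main(void) {
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp   = co_stack;
    co_ctx.uc_stack.ss_size = sizeof co_stack;
    co_ctx.uc_link          = &main_ctx;    /* where to go if producer returns */
    makecontext(&co_ctx, producer, 0);

    for (int i = 0; i < 3; i++) {
        swapcontext(&main_ctx, &co_ctx);    /* resume the coroutine */
        printf("back in main\n");
    }
    return 0;
}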

Paul Tarvydas

Aug 20, 2013, 4:00:49 PM
to flow-based-...@googlegroups.com
On 13-08-20 11:46 AM, Brad Cox wrote:
> Coroutines in C; much as I did it in TaskMaster. Setjmp/longjmp.
> http://www.yosefk.com/blog/coroutines-in-one-page-of-c.html
>
>
Interesting!

So, now I count three (3) ways to implement the innards of an FBP system:

1) Full-blown processes. (JPM, big-iron).

2) Co-routines with separate stacks (Brad, smaller iron).

3) State machines with "no" stack co-routines (stackless (iiuc) /
actually 1 stack, get-in-get-out-quick, bare metal).


I'd guess that I haven't clarified option (3) fully. So here goes...

A state machine can be considered to be a collection of bits of
straight-line code - the code to "step" the state machine based on its
current state (e.g. a case statement, but can be more elaborate when
dealing with hierarchical states).

The state machine performs one (and only one) step, each time an event
(input message) arrives.

During a step, the stack can be used for local variables, but, when the
step ends, the stack is cleared. No "local" state is saved on the
stack. Any saved state is stored in static variables (similar to
instance variables for objects).

In essence, the scheduler does a setjmp, and there's a longjmp at the
end of every piece of stepper code. Or, more efficiently (on modern
call-return hardware and languages), the scheduler calls the state
machine, and the state machine returns.

In this model (3):

- there is no need to preallocate stacks and save/restore the SP

- state machines must not be re-entrant (solved by providing a "busy"
flag for each state machine - if an input arrives while a machine is
"busy", the event is simply queued for later processing, at the end of
the step (or even later))

- machine steps must not contain long-lasting loops

- a "worker" thread (aka the scheduler) flits between state machines
(whose input queues are non-empty) and gives them cpu time, i.e. by
executing one "step".

As bizarre as the above restrictions sound, in practice they are quite
easy to live with.


[See also the final version in
http://www.chiark.greenend.org.uk/~sgtatham/coroutines.html ]
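
A minimal sketch of option (3), with invented names, a single machine, and the "scheduler" reduced to a plain loop - just to show that each step runs to completion on the shared stack, all persistent state lives in the machine struct, and the busy flag guards against re-entry:

#include <stdio.h>

/* A "join then add" machine with no private stack: all persistent state
   lives in the struct, and each step runs to completion and returns. */

enum state { WAIT_A, WAIT_B, DONE };

typedef struct {
    enum state st;
    int        busy;   /* re-entrancy guard, per the rules above */
    double     a, out;
} join_add;

/* One step per input event; no long-lasting loops inside a step. */
static void step(join_add *m, double input) {
    if (m->busy) return;   /* in a real system: queue the event for later */
    m->busy = 1;
    switch (m->st) {
    case WAIT_A: m->a   = input;            m->st = WAIT_B; break;
    case WAIT_B: m->out = m->a + input;     m->st = DONE;   break;
    case DONE:   break;                     /* ignore further input */
    }
    m->busy = 0;
}

int main(void) {
    /* The "scheduler": here just a loop handing queued events to one machine. */
    join_add m = { WAIT_A, 0, 0.0, 0.0 };
    double events[] = { 1.5, 2.5 };
    for (int i = 0; i < 2; i++)
        step(&m, events[i]);
    printf("%g\n", m.out);   /* 4 */
    return 0;
}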

pt
