Very interesting summary, Paul!
Minor quibble - point 2: IMO we could not have built our systems using unbounded queues. a) there is no guarantee that data will ever be processed, and b) such systems could potentially use infinite resources!
Regards,
The other Paul
--
You received this message because you are subscribed to the Google Groups "Flow Based Programming" group.
To unsubscribe from this group and stop receiving emails from it, send an email to flow-based-programming+unsub...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
9. Try writing a mouse driver. Try writing a network protocol stack
(esp. using OSI 7-layer design). Try writing a GUI. Try writing an
operating system. Solutions for these problems can easily be expressed
using FBP-ish diagrams composed with decoupled components. You will,
though, find yourself thinking a bit more reactively when solving these
classes of problems.
10. My personal, general objective: can we make it so cheap that we can
use FBP for every line of code?
Hi Ged,
Your last paragraph really rang a bell for me! I think I realized this years ago - even before FBP: forbid as little as possible, but make good practice easier and/or cheaper!
This is like Nate Edwards' method of getting hardware designers to use components - he told them they were free to build their own components but then they had to carry the cost of testing!
Thanks for an insightful article, Ged!
Regards,
Paul
An FBP graph is a functional program, but the separation between
components means that any non-functional parts are encapsulated
where they can do no harm. This is the reverse of Haskell, where
the imperative parts are at the top level and invoke the functional
parts as needed.
On Mar 16, 2015 6:35 PM, "Paul Tarvydas" <paulta...@gmail.com> wrote:
>
> On 15-03-15 05:21 PM, Paul Morrison wrote:
>>
>> Very interesting summary, Paul!
>>
>> Minor quibble - point 2: IMO we could not have built our systems using unbounded queues. a) there is no guarantee that data will ever be processed, and b) such systems could potentially use infinite resources!
>
>
> Hi Paul,
>
> Minor quibble with your minor quibble. I think that that means they cancel out and produce an unquibble.
>
> You *could not* have built your systems using unbounded queues and the very same system design.
>
> You *could* have built the system using unbounded queues if you had added one more line - a throttle. An explicit back-pressure line. Or, more likely, a "send next record" line.
Sorry, I don't understand! Do you mean a line in a diagram? If so, I want back-pressure on every line, so why should I mark it? Of course, it's easy to implement in cFBP. If it's expensive in your system, I can sort of see why you might want to use it sparingly, but I think that makes the mental model more complex...
What is a "send next record" line?
>
> I often work with protocols - it is important to me to show back-pressure explicitly on the diagram. You don't like to show back-pressure explicitly. I think it's a matter of taste... (or problem domain).
>
Do you have some concrete examples?
Regards,
Paul
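Paul T's "throttle" idea from earlier in the thread (an unbounded queue plus an explicit "send next record" line) can be sketched as credit-based flow control. This is a hedged illustration only: the `Connection` class and its API are made up for this sketch, not taken from any FBP implementation in the thread.

```python
from collections import deque

class Connection:
    """Unbounded queue plus an explicit credit line.

    The sender may only send while it holds credits; each receive
    returns a credit upstream (the "send next record" signal), so the
    queue can never grow past the initial credit count in practice.
    """
    def __init__(self, initial_credits=1):
        self.queue = deque()            # unbounded in principle
        self.credits = initial_credits  # back-pressure lives here

    def send(self, ip):
        if self.credits <= 0:
            raise RuntimeError("sender must wait for a credit (back-pressure)")
        self.credits -= 1
        self.queue.append(ip)

    def receive(self):
        ip = self.queue.popleft()
        self.credits += 1               # "send next record" back to the sender
        return ip

conn = Connection(initial_credits=2)
conn.send("a")
conn.send("b")
# conn.send("c") would raise here: the throttle line is exerting back-pressure
print(conn.receive())  # "a"; receiving returns a credit to the sender
conn.send("c")         # allowed again
```

So the system design stays "unbounded queue", but the extra line makes the effective bound explicit in the diagram, which seems to be the point of contention.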
On Mar 17, 2015 11:31 AM, "Paul Tarvydas" <paulta...@gmail.com> wrote:
> Think of a mouse driver written in FBP (that happens to be the kind of thing I program with FBP).
>
> As a user swipes the mouse, it produces a rapid sequence of mouse-move events. Let's say that the mouse produced 50 events. Your bounded queue is set to 25. What happened to the other 25 mouse-move events? The most important event is the last one - the final position of the mouse. Where did that one go?
>
> pt
Ah! I implemented this recently as a connection attribute on the cFBP implementations - I called it "dropOldest". I'm sure I posted this on the Group! :-) I guess for my purposes the default goes the other way, but I get your point.
Regards,
TOP (The Other Paul)
I didn't see your previous post - you're right, that would work. Actually these two solutions are clearly push and pull, respectively!
I just thought dropOldest would be generally useful, and it looks pretty in the diagram (I use a zigzag line)!
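The dropOldest connection attribute might be sketched as below. The class and the policy handling are illustrative assumptions, not code from any cFBP implementation; the mouse-move scenario is the one Paul T described.

```python
from collections import deque

class BoundedConnection:
    """Bounded queue with two drop policies: dropOldest keeps the most
    recent IPs (right for mouse moves, where the final position matters
    most); the default drops the newest IP when the queue is full."""
    def __init__(self, capacity, drop_oldest=False):
        self.q = deque()
        self.capacity = capacity
        self.drop_oldest = drop_oldest

    def send(self, ip):
        if len(self.q) < self.capacity:
            self.q.append(ip)
        elif self.drop_oldest:
            self.q.popleft()   # discard the stalest IP to make room
            self.q.append(ip)
        # else: drop the new IP (or block, in a back-pressure design)

    def receive(self):
        return self.q.popleft()

moves = BoundedConnection(capacity=25, drop_oldest=True)
for x in range(50):            # 50 mouse-move events into a 25-slot queue
    moves.send(("move", x))
last = None
while moves.q:
    last = moves.receive()
print(last)                    # ("move", 49): the final position survives
```

(In Python specifically, `collections.deque(maxlen=25)` gives dropOldest behaviour for free.)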
We could take turns!
Regards,
One of the Pauls :-)
Shhh, don't tell Paul M I said this, it is *possible* (not recommended)
using one's own implementation of FBP to send pointers-to-objects as
IP's. In this case, the FBP components would be essentially scripts -
they receive an object, push a couple of buttons on the object and send
it along down the line. So, it is *possible* to mix the two paradigms
(OO and FBP).
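A rough sketch of that mixed paradigm, assuming nothing beyond the description above: each component is a little script that receives a pointer to an object, pushes a couple of "buttons" (methods) on it, and sends it along. The class, method, and port names here are all made up for illustration.

```python
from queue import Queue

class Document:
    """The object being passed by pointer from component to component."""
    def __init__(self, text):
        self.text = text
        self.log = []
    def normalize(self):
        self.text = self.text.strip().lower()
        self.log.append("normalized")
    def stamp(self):
        self.log.append("stamped")

def normalizer(inport, outport):
    doc = inport.get()       # receive a pointer to the object, not a copy
    doc.normalize()          # push a button on the object
    outport.put(doc)         # send it along down the line

def stamper(inport, outport):
    doc = inport.get()
    doc.stamp()
    outport.put(doc)

a, b, c = Queue(), Queue(), Queue()
a.put(Document("  Hello FBP  "))
normalizer(a, b)
stamper(b, c)
result = c.get()
print(result.text, result.log)   # hello fbp ['normalized', 'stamped']
```

The obvious hazard, and presumably why it is "not recommended": once pointers are shared, two components can mutate the same object concurrently, which is exactly what IP ownership rules in classical FBP exist to prevent.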
FBP takes a different approach, as described in Paul's chapter on control IPs and Checkpointing:
pt
What about a practical example? Here's one from Plan B software: http://blogs.planbsoftware.co.nz/?p=247

"Imagine the following scenario for a leave approval system.
* An employee applies for leave.
* The request for leave must be approved by the HR manager.
* The request for leave must be approved by the employee's line manager.
* If both the HR manager and the line manager approve the request, it can then be authorized.
* Both the HR manager and the line manager will make their decisions independently of each other."

For this I would build a simple flow graph that would behave as follows:

First, a REQUEST_LEAVE component that receives a request IP. For every IP received, two new IPs are created for HR and Line Manager approval. The original IP is sent to COLLATE_REQUEST_APPROVAL. The new IPs are sent to HR_APPROVE_LEAVE and LINE_MANAGER_APPROVE_LEAVE.

These two components allow HR and the Line Manager to approve or reject the leave request. Their choice is sent to COLLATE_REQUEST_APPROVAL.

COLLATE_REQUEST_APPROVAL receives from LEAVE_REQUEST, HR_APPROVE_LEAVE, LINE_MANAGER_APPROVE_LEAVE and a TIMER. When COLLATE_REQUEST_APPROVAL receives a request IP, it creates an entry for it and calculates an escalation date for it based on configuration. When it receives one of the approvals, it updates the request IP and checks whether it has both HR and Line Manager approvals yet. When both have been received, it sends the request IP to either APPROVE_REQUEST or REJECT_REQUEST based on the approval content. When it receives an IP from TIMER, it sends any request IP that has passed its escalation date to ESCALATE_REQUEST.
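The collate step above can be sketched roughly as follows. Component and port names follow the example; the dict-based pending store, the participant keys, and the `escalation_days` setting are my own assumptions for the sketch.

```python
import datetime

class CollateRequestApproval:
    """Sketch of COLLATE_REQUEST_APPROVAL: holds pending request IPs,
    matches them with the two independent approvals, and escalates
    anything that sits past its escalation date."""
    def __init__(self, escalation_days=3):
        self.pending = {}   # request id -> {"request", "approvals", "escalate_at"}
        self.escalation_days = escalation_days

    def on_request(self, request):            # IP from REQUEST_LEAVE
        self.pending[request["id"]] = {
            "request": request,
            "approvals": {},
            "escalate_at": datetime.date.today()
                + datetime.timedelta(days=self.escalation_days),
        }

    def on_approval(self, req_id, who, ok):   # IP from HR / line manager
        entry = self.pending[req_id]
        entry["approvals"][who] = ok
        if {"hr", "line_manager"} <= entry["approvals"].keys():
            del self.pending[req_id]
            target = "APPROVE_REQUEST" if all(entry["approvals"].values()) \
                     else "REJECT_REQUEST"
            return (target, entry["request"])
        return None                           # still waiting on the other one

    def on_timer(self, today):                # IP from TIMER
        return [("ESCALATE_REQUEST", e["request"])
                for e in self.pending.values() if today > e["escalate_at"]]

c = CollateRequestApproval()
c.on_request({"id": 1, "employee": "alice"})
print(c.on_approval(1, "hr", True))            # None: line manager still pending
print(c.on_approval(1, "line_manager", True))  # routed to APPROVE_REQUEST
```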
Why the arrows back to the approvals?
Regarding reading from the queues, do they have to block on an empty queue?
I started thinking today that FBP and Kanban have a few things in common (any experience of synergy there?). In this view the bounded queues would be equivalent to the columns of a kanban board. The typical flow is FIFO, but it's not a strict requirement. And there is an order in which queues are scanned for things to pull, but you don't block on an empty queue; instead you look for another queue to pull from to keep things moving (e.g. swarming).
So with that view in mind, instead of multiplexing IPs to a single port, how about just allowing the component to pull from any (connected) queue, how and when it pleases? (How would you, for example, implement work stealing in an FBP network?)
Regarding state: if we keep with the physical metaphor of kanban, sending notes around kanban boards, the collate component would be the guy with the stapler, keeping pending requests and approval notices on his desk while processing them. In one sense it makes sense to see this desk as the database, but the state is kept _on_ the notices, so I'm thinking that the better pattern would be some Active Record implementation to get orthogonal persistence. The IP is not only the information of the request, it's also the handle used to update this information and persist it. Thus the database isn't connected to the component; it's connected to the IP and follows it around the network.
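The "pull from whichever connected queue has work" idea might be sketched as a non-blocking scan over the component's input ports, instead of blocking on one empty queue. Port names and scan order here are purely illustrative.

```python
from collections import deque

def pull_any(inports):
    """Scan connected input queues in priority order; return the first
    available IP instead of blocking on an empty queue."""
    for name, q in inports.items():
        if q:
            return name, q.popleft()
    return None, None    # nothing anywhere: the component can yield

inports = {"requests": deque(), "approvals": deque(["ok-1"]), "timer": deque()}
print(pull_any(inports))   # ('approvals', 'ok-1')
print(pull_any(inports))   # (None, None)
```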
Not sure about work stealing, but in real life I always find that one queue, multiple servers works best.
Tom Young
> Not sure about work stealing, but in real life I always find that one queue, multiple servers works best.
I should have said: one queue, feeding a load balancer, feeding multiple instances of the server... In one case we reduced the elapsed time of a run from 2 hours to 20 minutes!
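The "one queue, multiple server instances" pattern can be sketched with a single shared queue and several worker threads: each instance pulls the next job as soon as it is free, so the queue itself does the load balancing. The worker count and sentinel-shutdown convention are illustrative choices, not details from Tom's system.

```python
import queue
import threading

jobs = queue.Queue()      # the single shared queue feeding all instances
results = []
lock = threading.Lock()

def server(worker_id):
    """One server instance: pull jobs until the shutdown sentinel arrives."""
    while True:
        job = jobs.get()
        if job is None:   # sentinel: shut this instance down
            break
        with lock:
            results.append((worker_id, job))

workers = [threading.Thread(target=server, args=(i,)) for i in range(4)]
for w in workers:
    w.start()
for j in range(20):       # 20 jobs shared among 4 instances
    jobs.put(j)
for _ in workers:
    jobs.put(None)        # one sentinel per instance
for w in workers:
    w.join()
print(len(results))       # 20: every job processed exactly once
```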