json reader ------> do stuff ------> (rest of pipeline)
1. You receive data in the json reader, which gets passed to the "do
stuff" component.
2. "do stuff" can then check that json and decide to pass it along,
or just save the data and yield execution until the next time we
receive data from the json reader. When it has enough information,
it can pass it along to other components in the pipeline, clear the
saved data, and start all over.
Does this make sense?
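A minimal sketch of that accumulate-and-yield pattern using a plain
Python generator (the names `do_stuff` and the threshold of 3 are made
up here, and pypes' actual Component API differs, but the control flow
is the same):

```python
def do_stuff(downstream):
    """Accumulate readings until we have enough, then forward them."""
    saved = []
    while True:
        # Yield execution until the json reader sends us more data.
        reading = yield
        saved.append(reading)
        if len(saved) >= 3:            # "enough information" -- arbitrary threshold
            downstream(list(saved))    # pass it along the pipeline
            saved.clear()              # clear saved data and start over

batches = []
stage = do_stuff(batches.append)
next(stage)                            # prime the generator
for value in [1, 2, 3, 4, 5, 6]:
    stage.send(value)
# batches is now [[1, 2, 3], [4, 5, 6]]
```

In pypes/Stackless the scheduler does the equivalent of the
`stage.send(...)` loop for you each time a packet arrives.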
The way you describe it above, a cycle starts when the json reader gets
a request, but I can't depend on an external client starting each
cycle. That's why I thought the end of the system of components would
have a component that, in its run loop, could make an HTTP request to
the json input component at the top. Might that work? The json reader
would also accept requests from other possible clients to, say, turn on
a burner manually.
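Roughly what I mean, as a self-contained sketch: the last component
closes the loop by POSTing back to the json reader's HTTP endpoint.
Everything here (the handler class, the endpoint, the payload) is
invented for illustration; a real pypes HTTP adapter would look
different.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class JsonReaderHandler(BaseHTTPRequestHandler):
    """Stand-in for the json reader's HTTP front end (hypothetical)."""
    received = []

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        JsonReaderHandler.received.append(json.loads(body))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):      # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), JsonReaderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def end_of_pipeline_tick():
    """Last component's run loop step: kick off the next cycle upstream."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{server.server_port}/",
        data=json.dumps({"cmd": "start_cycle"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).close()

end_of_pipeline_tick()
server.shutdown()
# JsonReaderHandler.received is now [{"cmd": "start_cycle"}]
```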
I guess that brings up another question: would it be my "json HTTP"
component that would issue a response to external clients? Your
components that ship with pypes seem to all write to disk, so I guess
I'm a little confused how that would work. That's alright though, I've
only been looking at pypes for a day!
Thanks again for a very good explanation.
After reading that, and looking at more source code, I believe it
would work for my project.
I am also looking for a framework to use at work. I am convincing my
manager of the benefits of FBP, and am definitely taking a look at the
implementations in the Python world. I looked at Kamaelia briefly; I'll
take another look. I think I didn't look at it too long for a really
shallow reason: the code doesn't look very PEP 8-ish. Not a good
criterion, I know. I've also had a look at PyF.
So loops are not possible with your framework. What about composite
components? I've been looking around the pypes source code and
examples for something about this and haven't found anything yet. It
seems, from the book, to be an important feature of FBP.
And one last thing. The book describes options ports. It looks like it
would be pretty natural to have an options port, and use the IP that
comes from that port with the set_parameter method of the Component
class. Right?
I've been following the conversation.
In my work, we've been doing prototypes and demonstrations, using
Pypes as a component in them.
One of the things we had in our plans was the ability to instantiate
new Pypes servers out of composite components. That is, you have a
Master Pypes server with a large collection of Pypes components. When
you drag one of those composite components onto the workspace and
save, this instantiates a new Pypes server on the cloud that is then
hooked into the flow network. Inputs to the composite component
are routed to the new Pypes server and then the output is routed back
to its location.
Eric has mentioned before that Pypes and Stackless are so efficient
that it would be hard to justify doing this, and he is right.
However, the customers we work for actually have a firehose of data
and have difficulty analyzing it properly, if at all. So we are
actively investigating these possibilities.
I am now working with "live data" as an input to our
workflows. I have hooked up to the Chicago bus system which
generously provides a REST API to query the GPS locations and headings
of all their buses at any time of day. They usually have about 100
buses at night and upwards of 1200 buses running during the day.
The point of this exercise isn't necessarily to analyze the bus
system, but to work out how to deal with different rates of data input
and how to integrate live-data and static-data workflows. For
instance, the bus data updates every 3 minutes, which is slow.
However, some data we would like to connect to has an update rate of
less than a second. This presents very interesting scenarios.
Btw, I have been looking at PyF to see how it compares with Pypes, but
unfortunately I am still working on the setup.