Is this possible with pypes?


Craig Swank

Dec 2, 2010, 7:35:25 AM
to pypes
Hello,
I'm about a third of the way through Morrison's book, and am all fired
up about FBP. I got an instance of Pypes up and running and really
like it. Now my brain is churning, wondering if something like this is
possible with this framework. The simplest case:



json reader ------------- do stuff
------------- delay

| |

| |

-------------------------------------------------

An IP is started with the json reader, stuff happens to it, it goes to
delay, then back to the json reader. If an HTTP request comes in, it is
sent to the json reader, and the IP gets modified the next time it
comes back from delay. The IP keeps looping forever.


I have a project that I am working on for controlling a beer brewery,
and if the above is possible then I might be able to rebuild it with
pypes. It would be a vast improvement over what I have now, I think.

Thanks,

Craig

Craig Swank

Dec 2, 2010, 7:37:45 AM
to pypes
Oops, the drawing got chopped off, it should be more like this:


json reader ------------- do stuff ------------- delay
     |                                             |
     |                                             |
     -----------------------------------------------




Matt Weber

Dec 2, 2010, 1:12:43 PM
to py...@googlegroups.com
Yes, this is possible, but not quite like you are expecting. Since
pypes components use coroutines, basically pausing and resuming
execution, it is possible to maintain state. In your example, you
would have the json adapter, which would then send the output to the
next component in the pipeline. This component can then decide what
to do with the json (store it or pass it along). There is no concept
of a delay in pypes, but you get the same thing because pypes is idle
until we receive more data on the input ports of the adapters (json
reader in this case). Knowing this, you will have something like:


json reader
     |
     |    _______
     |   |       |
  do stuff ------|
     |
     |
   output


1. You receive data at the json reader, which gets passed to the "do
stuff" component.
2. "do stuff" can then check that json and decide to pass it along,
or just save the data and yield execution until the next time it
receives data from the json reader. When it has enough information,
it can pass the data along to other components in the pipeline, clear
its saved data, and start all over (see the sketch below).
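
A rough sketch of what such a stateful component might look like. This
assumes the usual pypes Component pattern (receive_all/send/yield_ctrl and
default 'in'/'out' ports); the 10-packet threshold is just a stand-in for
whatever "enough information" means in your case:

from pypes.component import Component

class DoStuff(Component):
    """Accumulates packets across scheduler passes before passing them on."""

    def __init__(self):
        Component.__init__(self)
        self._saved = []  # state is kept between pauses/resumes of the coroutine

    def run(self):
        while True:
            for packet in self.receive_all('in'):
                self._saved.append(packet)
                # "enough information" rule; 10 is just an example threshold
                if len(self._saved) >= 10:
                    for p in self._saved:
                        self.send('out', p)
                    self._saved = []  # clear saved data and start all over
            # pause here; we resume the next time the json reader sends data
            self.yield_ctrl()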


Does this make sense?


Thanks,
Matt Weber


Craig Swank

Dec 2, 2010, 3:32:32 PM
to pypes
Yes, that does make sense, and thanks for the answer. The reason I was
asking about a delay is because my current system has a bunch of
objects that control or read real physical devices, like solenoid
valves, electric burners, and thermometers. I need to ensure that the
thermometer checks the temperature every second; then the other
devices can respond to temperature changes, solenoid valves can
respond to how full the tank is, etc. I'm hoping I can build something
like this, where the three-way split is because I have three tanks,
and they all have thermometers, burners and more (not shown here):

json reader
     |
  splitter
     |______________________________
     |              |               |
thermometer    thermometer     thermometer
     |              |               |
  burner         burner          burner
     |______________|_______________|
     |
  merger
     |
json html output (so external clients, like my iPhone client, can see
what's going on with the brewery)

The way you describe above, a cycle starts when the json reader gets a
request, but I can't depend on an external client starting each cycle.
That's why I thought the end of the system of components would have a
component that has a

time.sleep(1)

in its run loop; then maybe it could make an HTTP request to the json
input component at the top? Might that work? The json reader also
would accept requests from other possible clients to, say, turn on a
burner manually.

I guess that brings up another question: would it be my "json html
output" component that would issue a response to external clients?
Your output components that ship with pypes seem to all write to disk,
so I guess I'm a little confused how that would work. That's alright
though, because I've only been looking at pypes for a day!

Thanks,

Craig

Craig Swank

Dec 2, 2010, 7:15:01 PM
to pypes
I was out walking the dog and realized the HTTP response part of that
question was dumb. Strike that from the record.

Eric Gaumer

Dec 2, 2010, 8:03:32 PM
to py...@googlegroups.com

Pypes is optimized for large data sets and was designed to process large amounts of data in an event-driven manner. It's based on Morrison's Flow-Based Programming but only implements a subset of FBP and does not support loop-type networks - http://gaumer.blogspot.com/2009/08/loop-type-networks-in-pypes.html

With that said, you can "pull" data in, provided you're not pulling in large amounts. For instance, you can fetch some RSS data or web pages but you can't (shouldn't) pull in a huge table of data from a database.

The reason being, pypes runs components in order using a topological sort of the network - http://gaumer.blogspot.com/2009/09/pypes-topological-scheduler.html - It does this cooperatively rather than preemptively (i.e., the scheduler will never preempt you; your component "yields" when it's done doing what it needs to do).

The HTTP layer in pypes actually runs in a different process than the components. Data arrives through an HTTP request and is queued in a multiprocessing queue. Pypes can be configured to run an instance of your graph/network on each CPU/core so data can be processed in parallel. Each of these processes pulls data packets off the queue in a round-robin fashion.

You could push data back into the system from a component by using an HTTP POST/PUT. If you want a delay, you can call sleep in one of your components (adapter or publisher I suppose). When you wake, yield and as long as data is waiting on the multiprocessing queue, the system will pull it off. If no data is waiting, the system will block waiting for more input.

So you can create an infinite "loop" by having your publisher send its packet back into the system through the HTTP interface:

Adapter --> Component1 --> Component2 --> Component3 --> ComponentN --> Publisher --> (back to Adapter via HTTP)

Then you can sleep in your Publisher, wake, and yield control.
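
A minimal sketch of such a publisher, assuming the pypes Component interface
used above; the controller URL, the 'temp' field, and the use of packet.get
are placeholders/assumptions you would adapt to your setup (urllib2 is
Python 2, matching the era):

import json
import time
import urllib2

from pypes.component import Component

class LoopbackPublisher(Component):
    """Sleeps, then POSTs its packet back into the pipeline over HTTP."""

    def __init__(self):
        Component.__init__(self)

    def run(self):
        while True:
            for packet in self.receive_all('in'):
                body = json.dumps({'temp': packet.get('temp')})  # placeholder field
                req = urllib2.Request('http://localhost:5000/data',  # placeholder controller URL
                                      body,
                                      {'Content-Type': 'application/json'})
                time.sleep(1)          # the one-second delay
                urllib2.urlopen(req)   # push the data back in, restarting the cycle
            self.yield_ctrl()          # hand control back until new data arrives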

Components can do whatever they need. They can write to disk but they can also send data to other services and/or devices. We send a lot of data to search indexes and data stores, using pypes as a middleware component.

You might want to check out Kamaelia - http://www.kamaelia.org - which supports a looping construct and uses a preemptive scheduler. It might be a better fit for your problem.

I'll be glad to help you with pypes if you think it's the right tool but you should survey the landscape a bit.

-Eric


Craig Swank

Dec 3, 2010, 9:17:41 AM
to pypes
Thanks again for a very good explanation.

After reading that, and looking at more source code, I believe it
would work for my project.

I am also looking for a framework to use at work. I am convincing my
manager of the benefits of FBP, and am definitely taking a look at the
implementations in the python world. I looked at kamaelia, briefly.
I'll take another look. I think I didn't look at it too long because
of a really shallow reason: the code doesn't look very pep8ish. Not a
good criterion, I know. I've also had a look at PyF.

So loops are not possible with your framework. What about composite
components? I've been looking around the pypes source code and
examples looking for something about this and haven't found anything
yet. It seems, from the book, to be an important feature for FBP.

And one last thing. The book describes options ports. It looks like it
would be pretty natural to have an options port, and use the IP that
comes from that port with the set_parameter method of the Component
class. Right?

Craig


Eric Gaumer

Dec 10, 2010, 1:13:04 PM
to py...@googlegroups.com
Sorry for the delayed response. I'm in Hawaii so I've been neglecting my email a bit ;-)

Matt gets the credit for the PEP8 stuff in Pypes. He re-factored a lot of code to be PEP8 compliant.

When I began to design pypes, I envisioned targeting something bigger than "applications". I had been working as a search architect for several years. I envisioned an SOA that didn't suck. I liked web services a lot, specifically REST, because I think HTTP has the most promise. It's not a silver bullet but HTTP is generally a good protocol that offers a lot of flexibility. And let's face it, the Internet is the most impressive piece of technology our generation has seen.

I've actually had several conversations with Paul Morrison about "modernizing" FBP. The concepts are great but some aspects are dated. We've also had conversations about how Flow-Based semantics should be part of the underlying language. My argument is that languages like Erlang are better suited for FBP because they offer inherent message passing semantics.

Conversely, the system has to be usable and it's probably unreasonable to expect users to learn a language like Erlang in order to use the system. It's a depressing thought but sometimes the most sound "technical" solution loses to an inferior solution that's more "approachable" by the masses.

Python seemed like a good compromise. It's a popular language with a broad range of support libraries. It's also great at text/data processing, which is important to any Flow-Based system. In order to get inherent message passing semantics, we decided to use Stackless Python. This hasn't been a barrier for us and I typically package a stackless interpreter with pypes for enterprise clients. We've definitely got our eye on PyPy.

So with that little bit of background, let me address your questions.

In terms of composite components, Morrison's ideas are tied to performance of hardware at the time as well as languages of that era. Pure message passing systems like Erlang and Stackless Python can run hundreds of thousands of processes at almost no cost. There's no loading and unloading of code that needs to be done. We can instantiate huge networks on modern hardware at very little cost.

In terms of stepwise decomposition, I see value in this but I disagree with Morrison's approach. Again, the idea is dated. I view these composite components or "subnets" as individual web services that can be chained together to form elaborate architectures that span departmental or even geographic boundaries.

In this sense, elementary components are assembled to form networks. These networks are then assembled to form larger more sophisticated networks. A "black box" subnet then becomes a web service that speaks HTTP. This means that my network can interact with some other service, perhaps not even flow based, as long as that service speaks HTTP. It could be written in any language, it really doesn't matter.

This is where things like webhooks come into play: http://wiki.webhooks.org/w/page/13385124/FrontPage

It's about building a programmable web and pypes is really geared toward web based architectures and data exchange.

Think about your typical SOA architecture today. You have a master process that calls N services passing in some piece of data. It passes the data back and forth to each service and finally persists it somewhere. Why not chain those services together in a pipeline? The output of one service becomes the input to another. That last service specifically persists the data for you.

If you need/want to create composite components, think of them as separate web services and move data between them using HTTP. This will be much simpler and provide much greater flexibility. Of course any design comes with tradeoffs and in this case, latency needs to be considered. If latency is a concern, then build this as a single network comprised of "local" elementary components :-)

As for option ports, messages should be self contained. I think passing in options or arguments separate from the message is a bad idea. It's just too rigid. The Packet interface in pypes supports meta-data at both the packet and field level. For instance I could have a packet with a meta field describing the main language as English. At the same time I can have a specific field (e.g., summary) that uses a different language. In this case, that field can have a meta field describing the language as French. This is a common scenario and there are others.

Bottom line is that "options" are really nothing more than meta-data (data that describes data). This approach is much more flexible than passing in meta-data on additional ports.

With that said, you can add additional ports and pass in any information you want, providing your components all understand how to deal with that data. You might just decide to create a packet level meta-field called "options". Meta fields are a list of key value pairs (dict) so you can pass an unlimited and dynamic number of options through that mechanism.
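
To illustrate the idea, something like the following; the set/set_meta calls
and their signatures are my guess at the Packet interface, so check the
actual Packet class before relying on them:

# inside a component's run() loop, after receiving `packet` on the 'in' port:
packet.set('summary', u'Une courte description')             # a regular data field
packet.set_meta('language', 'en')                            # packet-level meta: document is English
packet.set_meta('language', 'fr', 'summary')                 # field-level meta: the summary is French
packet.set_meta('options', {'units': 'celsius', 'tank': 2})  # a free-form "options" dict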

In order to generalize a component, pypes does offer a set/get parameter interface. So for instance, if my component writes to Solr, I can generalize it by providing a user-defined parameter that allows them to specify the host and port, etc.
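
For example, a component could be generalized roughly like this. The
set_parameter/get_parameter calls come straight from the discussion above,
but the parameter names, defaults, and the Solr details are made up:

from pypes.component import Component

class SolrWriter(Component):
    """Writes packets to a Solr index; host/port are user-configurable."""

    def __init__(self):
        Component.__init__(self)
        # exposed as user-defined parameters instead of being hard-coded
        self.set_parameter('host', 'localhost')
        self.set_parameter('port', '8983')

    def run(self):
        while True:
            host = self.get_parameter('host')
            port = self.get_parameter('port')
            for packet in self.receive_all('in'):
                # post the packet's fields to http://host:port/solr/update here
                pass
            self.yield_ctrl()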

Hope this helps.

-Eric

Craig Swank

Dec 11, 2010, 1:13:46 PM
to py...@googlegroups.com
No worries on the delay.  I wouldn't be responding to anything from Hawaii.

I'm glad to hear some of your critiques of Morrison's FBP.  I am still in the middle of the book, but when his examples involve offsets and streams of characters for packets, I started wondering if I wanted to continue.

I like your idea of composite components being replaced by a web service (which is an instance of pypes).  It seems like it would be nice if you could take your web interface, connect some components, and then instead of starting it up as a web service, click a 'make composite component' button, then the group of components re-appears as a single box, and the new component appears as a choice on the left side, probably under a new category called 'composite.'  Just an idea.

That being said, my brewery project (see https://github.com/cswank) has always been about me trying new things that I learn.  So last weekend I made my own FBP framework that does exactly what I want it to do.  Its main purpose is not to do massive data processing, and do it efficiently.  I wanted it to be more about allowing other beer brewers who are not programmers to take a web interface like yours and be able to make their own brewery and control it.  No two home breweries are set up exactly the same, so it must be very easy to configure your own application.  When I saw your web interface I knew it would be perfect for what I want to do.

However, the ability to loop the data going through is very important, and I also want the installation to be very easy for non-programmers, which for me means it should run on plain Python; that is why I made my own FBP framework.  It is based on Python's multiprocessing, with each component being a subclass of multiprocessing.Process.  The components communicate with clients via multiprocessing.connection.  That means that message passing is probably inefficient compared to real FBP implementations: my packets must get pickled, transmitted through ports, then un-pickled.  For what I want to do, though, that is no problem at all.  I haven't released it yet.  I don't have a web interface for it, and I was wondering if you would be offended if I jacked a bunch from what you've done for yours.
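
(Not Craig's actual code, just a minimal sketch of the pattern he describes,
where each component is a multiprocessing.Process and packets are pickled
across multiprocessing connections; the sensor read and the 65-degree
threshold are invented for the example.)

import multiprocessing
import random
import time

class Thermometer(multiprocessing.Process):
    """Reads a temperature once a second and sends it downstream."""

    def __init__(self, out_conn):
        multiprocessing.Process.__init__(self)
        self.out_conn = out_conn

    def run(self):
        while True:
            packet = {'temp': 60 + 10 * random.random()}  # stand-in for a real sensor
            self.out_conn.send(packet)   # pickled and pushed through the connection
            time.sleep(1)

class Burner(multiprocessing.Process):
    """Switches on or off based on incoming temperature packets."""

    def __init__(self, in_conn):
        multiprocessing.Process.__init__(self)
        self.in_conn = in_conn

    def run(self):
        while True:
            packet = self.in_conn.recv()  # un-pickled on arrival
            state = 'on' if packet['temp'] < 65 else 'off'
            print('burner %s' % state)

if __name__ == '__main__':
    recv_end, send_end = multiprocessing.Pipe(duplex=False)
    Thermometer(send_end).start()
    Burner(recv_end).start()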

Craig

PS.  Does Morrison agree with your opinion that some of his ideas in the book are dated?

Eric Gaumer

Dec 12, 2010, 12:23:13 PM
to py...@googlegroups.com
Sweet.

Paul and I strongly disagree on certain topics but a lot of that is driven by past experiences. Given the large generation gap, it's to be expected. I took a lot of FBP concepts and interpreted them to fit my needs. Our biggest disagreement was regarding language. He strongly believed that Java was more readable and more elegant than Erlang with respect to FBP. My argument was that he couldn't provide an unbiased perspective because he hadn't written any Erlang code (of course it's going to seem unreadable to him).

Which book are you reading? He just published a second edition.


I haven't bought the book but he sent me a copy since it contained some quotes of mine.

Feel free to use anything you need for your UI.

-Eric

Jacob Everist

Dec 12, 2010, 11:32:51 PM
to py...@googlegroups.com
Hi guys,

I've been following the conversation.

In my work, we've been doing prototypes and demonstrations using Pypes
as a component.

One of the things we had in our plans was the ability to instantiate
new Pypes servers out of composite components. That is, you have a
Master Pypes server with a large collection of Pypes components. When
you drag one of those composite components onto the workspace and
save, this instantiates a new Pypes server on the cloud that is then
hooked into the Flow network. Inputs to the composite component are
routed to the new Pypes server and then the output is routed back to
its location.

Eric has mentioned before that Pypes and Stackless are so efficient
that it would be hard to justify doing this, and he is right. However,
the customers we work for actually have a firehose of data and have
difficulty analyzing it properly, if at all. So we are actively
investigating these possibilities.

Currently I am working with "live data" as an input to our workflows.
I have hooked up to the Chicago bus system, which generously provides
a REST API to query the GPS locations and headings of all their buses
at any time of day. They usually have about 100 buses at night and
upwards of 1200 buses running during the day.

The point of this exercise isn't necessarily to analyze the bus
system, but to work out how to deal with different rates of data input
and how to integrate live-data and static-data workflows. For
instance, the bus data gets an update every 3 minutes, which is slow.
However, some data we would like to connect to has an update rate of
less than a second. This presents very interesting scenarios.

Btw, I have been looking at PyF to see how it compares with Pypes, but
I am still working on the setup unfortunately.

Jacob

--
Jacob Everist

cell:  310-425-9732
email:  jacob....@gmail.com

Eric Gaumer

Dec 18, 2010, 4:41:47 AM
to py...@googlegroups.com
One of our core requirements when designing pypes was scalability. Our common use case involved "processing" tens of millions of documents (hundreds of millions in some cases). Because of this, throughput became a focus which led to decisions to use stackless, HTTP, and multiprocessing. At the same time, we wanted something simple and easy to understand and I think stackless really helped achieve this. The decision to use stackless wasn't one we took lightly. Focusing on a non-standard interpreter has its drawbacks.

PyF uses a library called zflow: http://www.thensys.com/?page_id=21

We looked at this library before we had started work on pypes. Zflow uses generators (a common approach to dataflow programming in Python) which end up pulling data into the pipeline. Essentially you invoke the last component in your network which sends a request up the pipeline (in reverse) until it gets to the top level generator which does some sort of fetch operation (it somehow generates data packets).

This approach has advantages in terms of memory because you can efficiently handle large data sets by ensuring only a limited number of packets/records are produced at any given time. It's a classic producer/consumer problem where your producer only provides what the consumer can handle.
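
In plain Python the pull style looks roughly like this (a toy sketch to show
the idea, not zflow's actual API):

def read_records():
    # top-level producer: yields one record at a time, only when asked
    for i in xrange(1000000):
        yield {'id': i}

def enrich(records):
    for record in records:
        record['enriched'] = True
        yield record

def write(records):
    for record in records:
        pass  # persist the record somewhere

# calling the *last* stage pulls records through the whole chain one at a
# time, so only a handful of records are ever in memory at once
write(enrich(read_records()))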

The drawback with this approach is scalability. It becomes difficult to scale this horizontally. With Python's poor threading due to the GIL, scaling vertically is also problematic. In terms of horizontal scalability, when you have some component that generates data (e.g., reads records from a database, files from a file system, etc.), scaling out by adding new nodes means you have to manage access to the data across multiple nodes. The target data must therefore be partitioned in some way much like map/reduce frameworks do.

With pypes we knew from day one that we would require multiple nodes in order to reach our target throughput. By choosing to push data in (essentially decoupling the "feed" process), we  knew we could scale the "producer" independent from the "consumer". In this way, we could build producers to feed data as fast as we could (threading, multi-node, partitioning, etc.) and then simply add enough consumers to cope with the incoming rate of data.

In cases where the data is in a database, producers tend to be slower due to the nature of sequential database reads. In the case of other types of data streams (file system resources or data streams like twitter), it becomes as easy as adding more pypes instances to cope with the amount of incoming data.

I don't think there is a right or wrong approach (push vs pull) but each has its tradeoffs. With that said, there are a number of new features sitting in the latest tip of development for pypes.

I've been working on a project for a client where one of the end points (publishers) was Riak. We needed to extend pypes to be able to handle things like deletes and updates. Initially we had no requirement for sending operational codes into the pipeline (everything was an "add" or "create" operation). We also had no need for "routing" information (i.e., where to send a packet on any particular end point).

With Riak for instance, data is partitioned into "buckets" or namespaces. Incoming data might need to go to different buckets based on content (i.e., content based routing) or some static classification or rule. You could have added routing information to the packet data on the producer side of things but I wanted something more flexible and more universal.

Routing is now handled via RESTful URLs. With this new improved controller, you can also send in raw strings of data. The HTTP verb is then added as packet meta-data to serve as an operations code (i.e., create, read, update, delete) allowing components to decide on how certain operations are handled. Up until this point, doing an HTTP PUT, DELETE, or GET would have been silently discarded by the controller and the pipeline would have never seen the associated data. The new controller now passes these requests into the pipeline as legitimate packets.

A few examples:

# test GET (triggers/generators)
curl -X GET -H 'Content-Type: application/json' http://localhost:5000/data
curl -X GET -H 'Content-Type: application/json' http://localhost:5000/data/route
curl -X GET -H 'Content-Type: application/json' http://localhost:5000/data/route/id

# test PUT (requires route and id)
curl -X PUT -H 'Content-Type: application/json' http://localhost:5000/data/route/id -d '{"id":"1","title":"test"}'

# test POST with/without routing and id info (route and id are optional)
curl -X POST -H 'Content-Type: application/json' http://localhost:5000/data -d '{"id":"1","title":"test"}'
curl -X POST -H 'Content-Type: application/json' http://localhost:5000/data/route -d '{"id":"1","title":"test"}'
curl -X POST -H 'Content-Type: application/json' http://localhost:5000/data/route/id -d '{"id":"1","title":"test"}'

# test multipart/form-data with and without routing info (route and id are optional)
curl -F document=@dat.json localhost:5000/data
curl -F document=@dat.json localhost:5000/data/route
curl -F document=@dat.json localhost:5000/data/route/id

# test gzip compression for files (defaults to POST and route and id are optional)
curl -F document=@dat.json.gz -H 'Content-Encoding: gzip' localhost:5000/data
curl -F document=@dat.json.gz -H 'Content-Encoding: gzip' localhost:5000/data/route
curl -F document=@dat.json.gz -H 'Content-Encoding: gzip' localhost:5000/data/route/id

# test deprecated /docs controller (with/without gzip)
curl -F document=@dat.json localhost:5000/docs
curl -F document=@dat.json.gz -H 'Content-Encoding: gzip' localhost:5000/docs

# test DELETE (route and id are required)
curl -X DELETE -H 'Content-Type: application/json' http://localhost:5000/data/route/id

The new controller still includes routes for the old "docs" URL but the new "data" URLs provide much more flexibility. For instance, imagine one of your publishers is Riak; sending an HTTP DELETE request will cause a packet to be created (containing just a few meta-data fields) that is passed down through the pipeline/network. Any component can check the method type and act accordingly. This makes most sense in your publishers, which can now detect that the user wants to delete a resource.
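
In a publisher that might look something like the following sketch. I'm
assuming the verb, route, and id land in packet meta fields named 'method',
'route', and 'id', so check what the controller actually sets before relying
on those names:

from pypes.component import Component

class RiakPublisher(Component):
    """Routes create/update/delete operations to Riak buckets (sketch only)."""

    def __init__(self):
        Component.__init__(self)

    def run(self):
        while True:
            for packet in self.receive_all('in'):
                method = packet.get_meta('method')   # hypothetical meta field names
                bucket = packet.get_meta('route')
                key = packet.get_meta('id')
                if method == 'DELETE':
                    pass  # delete `key` from the `bucket` namespace in Riak here
                else:
                    pass  # create or update the document in `bucket`
            self.yield_ctrl()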

Many of these requirements have come from the idea of using pypes as a way of providing asynchronous web services. For instance, imagine a slow database where write latency is higher due to its transactional nature. You can stick a pypes instance in front of it as a sort of middleware. Your processes can now write asynchronously to pypes so your requests return immediately while pypes takes care of handling the actual database writes.

As far as read operations, there are several ways to handle this asynchronously but webhooks/callbacks seem most obvious. In truth, the GET operation was intended more to be a trigger. This comes from discussions with Jacob and the work he was doing.

There are times when you simply want an adapter to fetch some piece of data (i.e., do a "pull") such as an RSS feed. The only way to execute the pipeline is to send in data but with support for GET, you can now use this as a trigger.

You can write an adapter that, for example, fetches an RSS feed. When you want to execute the pipeline/network, just do a GET request. This will cause the pipeline to begin execution and your adapter will be called to run. It can then fetch the required data and send it on.
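
A trigger-style adapter might be sketched like this, again assuming the
Component/Packet interfaces used above; the feed URL and the 'body' field
name are placeholders:

import urllib2

from pypes.component import Component

class RSSFetchAdapter(Component):
    """Fetches an RSS feed each time a GET request triggers the pipeline."""

    def __init__(self):
        Component.__init__(self)

    def run(self):
        while True:
            # each trigger packet arriving from the controller is our cue to fetch
            for trigger in self.receive_all('in'):
                raw = urllib2.urlopen('http://example.com/feed.rss').read()
                trigger.set('body', raw)     # attach the fetched feed to the packet
                self.send('out', trigger)
            self.yield_ctrl()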

The new controller offers more flexibility by being very general. It doesn't define how a GET request must be handled, only that it occurred. You can choose to interpret that as a trigger or use it to build some sort of asynchronous web service that actually fetches data from some end-point and uses webhooks to return a response.

On a last note, I've also added tooltips to the ports in the UI. If you hover your mouse over any component's ports, a tooltip will appear showing the name of the port (whatever the component developer chose to name the port - e.g., in1, in2, out1, out2). The idea here was to be able to do more advanced routing and allow the user to identify which in/out port they are connecting a wire to/from.

Regards,
-Eric
 