Generic Trigger


Silvina Spindler

Aug 4, 2024, 8:22:26 PM
to olbalibol
All of them fail miserably, overlooking the mapping and transformation of data between 3rd-party services. Frequently, these 3rd-party services require specific data, types, formats, etc. that make all but their most basic, useless, interactions impossible (I'm talking about Zapier and IFTTT). Pointless for the real world.

Workato - You guys have pretty much nailed that "killer feature", the one that actually makes these services viable for real modern businesses. I learned your system in about an hour. Awesome.


There is one GLARING hole in your product. And I say glaring because it is your Achilles' heel. Its usefulness depends on *your* insight into which 3rd-party services to add, and which of their triggers and actions to implement. Not to mention there is a whole wide world of custom applications out there that need data to flow in and out of them.


Now... I'm a new user evaluating the product... I may have missed something. Take that for what it is; this is only constructive criticism. I love what you are doing, it just doesn't meet "all" of our needs yet. But I'm more than a little surprised at the lack of "generic" application triggers and actions. These are the most valuable. They should have been first. The Internet (for the most part) is based on open standards like JSON, XML, HTTP, OAuth, etc.


Building these integrations would allow the community to contribute and innovate new recipes with real additive value to your service and organization, while not hobbling power users that have business use cases and requirements for using obscure service A or API call B.


I just wanted to let you know that this is in our pipeline and something that we truly believe in as well. Workato is all about the community and sharing. Apart from being able to share integration recipes that can help solve the greater integration problem, we believe it is also important for people to be able to contribute towards building new applications, or new triggers and actions, on Workato. We are expecting an early version of this to be released early next year. No specific timeline at the moment, but rest assured, our engineering team is working on this.


Firstly, we are currently building a connector SDK that is planned for release early next year. The connector SDK will allow our partners to create simple connectors to well-defined APIs with minimal effort. They will be able to use it for their own use case (account-level visibility). To make these connectors public, we will work with the partner to deploy, maintain, and support them.


As for the request to make REST calls without creating a connector (i.e., a generic REST connector), we do think this can be useful for the more technically inclined, and we want to do it. However, we do not have any concrete plans at the moment. We are hoping that the connector SDK will help alleviate some of these requirements in the short term.


But be aware that this can be triggered on any and all OH events: Thing status changes, channel events, rules running or going idle, Items being created or loaded, system runlevels, etc. Bring up the developer sidebar and open the event stream. Everything you see there can be used to trigger a rule.


You've got the general idea from your answer. Dynamic Apex and dynamic SOQL are the best way to reflect on SObjects, and as you've discovered, you can determine the type easily. I would, however, ensure your method takes a List<SObject> so it can implement bulkification.


However, you will not then know the event being handled from within the createTasks method. You can either pass this as an additional parameter, or, to make your trigger even simpler and let your method handle future events without modifying the signature, you could do something like this...
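The original code sample didn't survive the thread, but the idea being described is that Apex trigger context variables (Trigger.isInsert, Trigger.isUpdate, etc.) are visible from any method called during trigger execution, so the handler can inspect them itself. A minimal sketch of that approach, with hypothetical names (OpportunityTrigger, TaskCreator, createTasks):

```apex
// Sketch only: the trigger body stays a one-liner, and the method
// signature never changes when new events are added later.
trigger OpportunityTrigger on Opportunity (after insert, after update) {
    TaskCreator.createTasks(Trigger.new);
}

public class TaskCreator {
    // Takes List<SObject> so it stays generic and bulkified.
    public static void createTasks(List<SObject> records) {
        // Trigger context variables are readable here because this
        // method runs within the trigger's execution context.
        if (Trigger.isInsert) {
            // handle the whole batch of inserted records
        } else if (Trigger.isUpdate) {
            // handle the whole batch of updated records
        }
        // The concrete type can be determined once for the batch.
        Schema.SObjectType objType = records[0].getSObjectType();
    }
}
```

The trade-off is that the method now only works when called from a trigger; passing the event as a parameter keeps it testable in isolation.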


More advanced: if you wanted to get more sophisticated, instead of the if/else approach in the handlers, you could develop this idea into an Apex base class, extending it and overriding base-class methods to specialise for the specific objects you want to support, and implementing a factory pattern to register the derived classes so the triggerHandler method can instantiate them automatically via Type.newInstance based on the object type.
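A minimal sketch of that base class plus factory idea, assuming hypothetical names (TriggerHandler, AccountTriggerHandler, TriggerDispatcher); the registration map and handler logic are illustrative only:

```apex
// Base class: each object-specific handler overrides handle().
public abstract class TriggerHandler {
    public abstract void handle(List<SObject> records);
}

// One derived class per supported object.
public class AccountTriggerHandler extends TriggerHandler {
    public override void handle(List<SObject> records) {
        // Account-specific, bulkified logic goes here.
    }
}

public class TriggerDispatcher {
    // Factory registry: maps an SObjectType to the Apex class
    // that handles it.
    private static Map<Schema.SObjectType, Type> registry =
        new Map<Schema.SObjectType, Type>{
            Account.SObjectType => AccountTriggerHandler.class
        };

    public static void dispatch(List<SObject> records) {
        if (records == null || records.isEmpty()) return;
        Type handlerType = registry.get(records[0].getSObjectType());
        if (handlerType != null) {
            // Type.newInstance() requires a no-arg constructor.
            TriggerHandler handler =
                (TriggerHandler) handlerType.newInstance();
            handler.handle(records);
        }
    }
}
```

Each trigger then shrinks to a single `TriggerDispatcher.dispatch(Trigger.new);` call, which is the "minimise the logic in each trigger" goal mentioned below.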


Further info: this is a simpler example of a number of broader trigger patterns I've seen, including one of my own. If you want to take a broader look at them, I've listed the ones I'm aware of below; others may have their preferences. Most I've seen should allow you to map a single trigger handler class to an object, but none will work around the need for a distinct trigger per object; the best we can do is minimise the logic in each trigger. Note that these do vary in features and overhead.


I have a definition where iterations (controlled via Anemone) occur (or not) depending upon a variety of conditions (via Gates/Filters/etc.). Some iterations feed Kangaroo with data and some do not. Most Kangaroo-related data are prepared via Anemone loops.


Anemone is designed to start via trigger events (signaled most commonly by value alterations via sliders, or yes/no values via "toggle" components, etc.). When nested, a change of the parent loop's counter triggers the slave loop sequence, and so on.


The question is: is there any way for a component (say, pull points on curves) to propagate ("emit" some signal) the fact that it has finished doing its job? That way one could avoid using delay components, and some sort of automation could be achieved.


In the case of the original poster, I believe a simple "list length" component with the input flattened can serve as the trigger; this is a trick I use often to cause one component to execute after another. Does that help?


Also, the following function, when included in a trigger, will return the columns of a table that were updated. All you need to pass it is the table name and the COLUMNS_UPDATED() system variable. Use the function in a WHERE clause as if it were a table to create some dynamic SQL to do the rest.
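The function itself was attached elsewhere in the thread and isn't reproduced here, but a sketch of what such a table-valued function could look like follows. All names (udf_auditColumnsUpdated, RowNum, ColumnName) are assumptions based on the discussion; the key fact it relies on is that bit N of the COLUMNS_UPDATED() bitmask corresponds to the column whose column_id is N:

```sql
-- Sketch only: decodes the COLUMNS_UPDATED() bitmask into column names.
CREATE FUNCTION dbo.udf_auditColumnsUpdated
(
    @TableName      SYSNAME,
    @ColumnsUpdated VARBINARY(128)
)
RETURNS TABLE
AS
RETURN
    SELECT  ROW_NUMBER() OVER (ORDER BY c.column_id) AS RowNum,
            c.name AS ColumnName
    FROM    sys.columns c
    WHERE   c.object_id = OBJECT_ID(@TableName)
            -- Test this column's bit: byte ((id-1)/8)+1, bit (id-1)%8.
      AND   SUBSTRING(@ColumnsUpdated, ((c.column_id - 1) / 8) + 1, 1)
            & POWER(2, (c.column_id - 1) % 8) > 0;
GO

-- Usage from inside a trigger:
-- SELECT ColumnName
-- FROM dbo.udf_auditColumnsUpdated('dbo.MyTable', COLUMNS_UPDATED());
```

Note that COLUMNS_UPDATED() is only valid inside an INSERT or UPDATE trigger, so it must be captured there and passed in as a parameter.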


Hi Jeff, I have just implemented it, but (forgive my ignorance), how do I actually link the old and new values of the changed items into it? The other thing is that this will fail if I update more than one row at once. What would be the best way to handle this? (A WHILE loop for each record in the Inserted/Deleted tables?) My trigger as it stands looks as follows...


Also, you might want to dig into Books Online in the System Functions area... there you will find things like User_Name, Session_User, and Host_Name()... these could be used as defaults/formulas in your AuditHeader table to capture the information you require for UserID (ISNULL(Session_User,User_Name)) and UserPC (Host_Name()). No need to have that explicit code in the header code. Same goes for ChangeDateTime... use GETDATE() for the default on the column in the table.
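A sketch of what those column defaults could look like, assuming the AuditHeader column names (UserID, UserPC, ChangeDateTime) used in this thread:

```sql
-- Sketch only: capture who/where/when via DEFAULT constraints
-- instead of explicit code in the trigger.
ALTER TABLE dbo.AuditHeader
    ADD CONSTRAINT DF_AuditHeader_UserID
        DEFAULT (ISNULL(SESSION_USER, USER_NAME())) FOR UserID;

ALTER TABLE dbo.AuditHeader
    ADD CONSTRAINT DF_AuditHeader_UserPC
        DEFAULT (HOST_NAME()) FOR UserPC;

ALTER TABLE dbo.AuditHeader
    ADD CONSTRAINT DF_AuditHeader_ChangeDateTime
        DEFAULT (GETDATE()) FOR ChangeDateTime;
```

With these in place, the trigger's INSERT into AuditHeader simply omits those columns and the defaults fill them in.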


... the way you had it, it would return the COLUMNS_UPDATED() value more than once if there were more than one record in the Inserted table. It's a single value, as is SCOPE_IDENTITY(), so there is no need for the FROM clause.


You will, indeed, need a loop, but not row by row through the Inserted or Deleted tables. Instead, loop through the column names provided by the udf_auditColumnsUpdated function (that's why the table output of the function has a RowNum in it). This will require some dynamic SQL to correctly insert records into AuditDetail based on the columns for deletes (you set @ColumnsUpdated to 0xFFFFFFFF), inserts (the system sets @ColumnsUpdated to all columns), and updates (the system sets @ColumnsUpdated to just those columns updated).


Of course, that dynamic SQL will need to join to the Inserted/Deleted tables, but the dynamic SQL actually runs in a different scope and won't be able to see them. So you will need to create a temp table for each. Since these triggers handle a relatively small number of records, it would probably be OK to use SELECT/INTO in this case to keep the code portable between tables.
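A sketch of the temp-table plus dynamic-SQL step described above. The table, column, and key names (AuditDetail, SomeColumn, ID) are illustrative; in the real trigger, @ColumnName would come from the loop over the UDF's output:

```sql
-- Sketch only: materialise the pseudo-tables so the dynamic SQL,
-- which executes in its own scope, can see the data.
SELECT * INTO #Inserted FROM Inserted;
SELECT * INTO #Deleted  FROM Deleted;

DECLARE @ColumnName SYSNAME = N'SomeColumn',  -- set inside the column loop
        @SQL        NVARCHAR(MAX);

SET @SQL = N'INSERT INTO dbo.AuditDetail (ColumnName, OldValue, NewValue)
             SELECT ' + QUOTENAME(@ColumnName, '''') + N',
                    CONVERT(VARCHAR(8000), d.' + QUOTENAME(@ColumnName) + N'),
                    CONVERT(VARCHAR(8000), i.' + QUOTENAME(@ColumnName) + N')
             FROM   #Inserted i
             JOIN   #Deleted  d ON i.ID = d.ID';  -- ID = illustrative PK

EXEC sp_executesql @SQL;
```

QUOTENAME guards the column name against injection; the CONVERT to VARCHAR is one common way to store heterogeneous old/new values in a single audit column.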


To make life a bit easier and safer when copying the trigger in the future, I recommend you add one more variable at the very beginning of the trigger to store the table name. That way, you only need to change the code in one place for each table.


Thanks to your help, Jeff, I am almost there. It audits changes made to multiple rows correctly, but when I update a single row and single field, it does not work for one of the fields (which happens to be the last field in the table). A single row for other fields, and multiple values for single or multiple rows, work correctly. Any suggestions?


William is correct... if you update a column to the same value, it will register as updated even though the value hasn't changed... it's up to your code to compare the Inserted/Deleted values for changes... and then there's that NULL thing...
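A sketch of that Inserted/Deleted comparison, including the NULL transitions the post alludes to (a plain `<>` misses NULL-to-value and value-to-NULL changes). Column and key names are illustrative:

```sql
-- Sketch only: find rows where SomeColumn genuinely changed,
-- treating NULL <-> value transitions as changes too.
SELECT i.ID
FROM   Inserted i
JOIN   Deleted  d ON i.ID = d.ID
WHERE  i.SomeColumn <> d.SomeColumn
   OR (i.SomeColumn IS NULL     AND d.SomeColumn IS NOT NULL)
   OR (i.SomeColumn IS NOT NULL AND d.SomeColumn IS NULL);
```

Only the rows returned by a comparison like this need audit detail records, even when COLUMNS_UPDATED() reports the column as touched.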


I have the same problem as with my previous version of the trigger: if I update a certain field in the table (which incidentally happens to be the last field) and no other fields at the same time, the second part of the trigger (inserting into the AuditDetail table) does not seem to fire. Any ideas here? In my version, I tried writing the columns-changed value (cast to a varchar) into the header. When it worked, I saw a square character in the field; when it didn't work, no square was written. If I update more than one column and include the problematic column in the update, it works fine. My original thought was that the UDF was the culprit, but I am not sure, as I go a bit cross-eyed reading through it. Any suggestions most welcome.


To replicate the workflow we have now, I need to be able to sync data back and forth between GitLab and HubSpot. To keep support and developers from having to bounce between tools, we like to sync the communication between tools; i.e., when someone adds a note to a ticket in HubSpot, it should get added to the corresponding issue in GitLab. The same goes for emails to/from the customer.
