jPOS-based application design/architecture doubts


Nik

May 22, 2017, 9:29:11 PM5/22/17
to jPOS Users
Hi All,

I have been reading a lot about jPOS, and thanks to some wonderful posts on this group and the programmer's guide, I have been able to understand the bits and pieces of jPOS.

However, I still have some confusion related to transaction management in a production system. 

For the sake of simplicity, I will be asking questions with reference to the following system:

1) A JPOS Server running on port 9999.

2) A dumb ISO request listener which does nothing but the following:

                       
ISOMsg respMsg = (ISOMsg) isoMsg.clone();
respMsg.setDirection(ISOMsg.OUTGOING);
respMsg.setResponseMTI();
context.put(Constants.REQUEST, isoMsg);
context.put(Constants.RESPONSE, respMsg);
context.put(Constants.SOURCE, isoSource);
space.out(queue, context, timeout);


Here, setConfiguration sets up the required timeout, space, queue, etc. 

3) A TransactionManager with a Selector which routes the incoming requests to appropriate Participants based on MTI:

selector = configuration.get(reqIsoMsg.getMTI());

4) Different Participants having PREPARE, COMMIT and ABORT methods to handle Transaction requests, Network requests, etc.
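For illustration, the MTI-based selection I have in mind looks roughly like the sketch below (class and method names here are my own, not the jPOS API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only -- class and method names are mine, not the jPOS API.
// A configuration maps each request MTI to the participant group that handles it.
class MtiSelector {
    private final Map<String, String> configuration = new HashMap<>();

    MtiSelector() {
        configuration.put("0200", "financial"); // financial request -> debit participants
        configuration.put("0800", "network");   // network management -> echo participants
    }

    String select(String mti) {
        // unknown MTIs fall through to a group that just builds an error response
        return configuration.getOrDefault(mti, "unsupported");
    }
}
```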

Use case: a client concurrently sends several Transaction ISOMsgs with an AccountNumber and a DebitAmount. A new database connection should be opened for each incoming request. The AccountNumber mentioned in the request should be debited by the DebitAmount and the transaction should be committed. An appropriate OK ISOMsg should be returned. In case of insufficient funds, an error ISOMsg should be returned.

With the above use case in mind, what would the design of the application look like?

In my mind, an application has an incoming request flow of View -> Controller -> Service -> DAO -> Database, and vice versa for the response.

Questions:

1) As per my understanding, the TransactionManager creates a new Participant instance for each incoming request, which runs in its own thread with its own context. Is that true?

2) Is this a good design? ---> Factories invoked inside the Participant's PREPARE return appropriate Service and DAO objects based on MTI. Once the Service and DAO objects are at hand, call the appropriate methods on them, handing over information from the incoming ISOMsg. Here I am assuming that since there will be a new Participant for each incoming request, and since the factory methods are invoked inside the Participant's PREPARE, I will have independent Service and DAO objects and database connections for each incoming request. Is this assumption correct?

3) Which is the better design: (a) opening, executing, committing, and rolling back the database transaction in the DAO layer, vs. (b) opening a DB connection in PREPARE, calling each Service and DAO method with this connection as a parameter, committing the connection in COMMIT, and rolling the connection back in ABORT in case of any error?

In the first design, everything happens inside the PREPARE method itself. PREPARE returns PREPARED, meaning the account has been successfully debited, or ABORT in case of any issue with the DAO transaction. I am unsure if this is the right way.

4) Any suggestions from experts on things I may have overlooked will be greatly appreciated.

Thanks,
Nik

Alejandro Revilla

May 22, 2017, 9:59:29 PM5/22/17
to jPOS Users
Please see my comments inline:

2) A dumb ISO request listener which does nothing but the following:

                       
ISOMsg respMsg = (ISOMsg) isoMsg.clone();
respMsg.setDirection(ISOMsg.OUTGOING);
respMsg.setResponseMTI();
context.put(Constants.REQUEST, isoMsg);
context.put(Constants.RESPONSE, respMsg);
context.put(Constants.SOURCE, isoSource);
space.out(queue, context, timeout);



There's a new IncomingListener participant that does basically that; feel free to use it (org.jpos.iso.IncomingListener).

You may want to take a look at http://jpos.org/tutorials for a use case example.

Here, setConfiguration sets up the required timeout, space, queue, etc. 

3) A TransactionManager with a Selector which routes the incoming requests to appropriate Participants based on MTI:

selector = configuration.get(reqIsoMsg.getMTI());
 
4) Different Participants having PREPARE, COMMIT and ABORT methods to handle Transaction requests, Network requests, etc.

Use case: a client concurrently sends several Transaction ISOMsgs with an AccountNumber and a DebitAmount. A new database connection should be opened for each incoming request. The AccountNumber mentioned in the request should be debited by the DebitAmount and the transaction should be committed. An appropriate OK ISOMsg should be returned. In case of insufficient funds, an error ISOMsg should be returned.

With the above use case in mind, what would the design of the application look like?

In my mind, an application has an incoming request flow of View -> Controller -> Service -> DAO -> Database, and vice versa for the response.

Questions:

1) As per my understanding, the TransactionManager creates a new Participant instance for each incoming request, which runs in its own thread with its own context. Is that true?

That's not correct. The TM instantiates a single instance of each participant; it uses the flyweight pattern. You can't use member variables in your participant implementation; that's the reason the prepare/commit/abort methods accept a Context, which is where you place your per-transaction variables.
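A jPOS-free sketch of the point above (all names are illustrative, not the jPOS API): one shared participant instance serves every transaction, and all per-transaction state travels in the Context rather than in member variables:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch, not the jPOS API: one shared participant instance
// (flyweight), with all per-transaction state kept in the Context.
class MiniContext {
    private final Map<String, Object> map = new ConcurrentHashMap<>();
    void put(String key, Object value) { map.put(key, value); }
    Object get(String key) { return map.get(key); }
}

class DebitParticipant {
    static final int PREPARED = 1;

    // Note: no member variables holding per-transaction data; this single
    // instance is called concurrently for many transactions.
    int prepare(long id, MiniContext ctx) {
        ctx.put("AMOUNT", 100L); // per-transaction variable lives in the Context
        return PREPARED;
    }

    void commit(long id, MiniContext ctx) {
        // reads back what prepare() stored for *this* transaction only
        Long amount = (Long) ctx.get("AMOUNT");
        ctx.put("RESULT", "debited " + amount);
    }
}
```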
 
2) Is this a good design? ---> Factories invoked inside the Participant's PREPARE return appropriate Service and DAO objects based on MTI. Once the Service and DAO objects are at hand, call the appropriate methods on them, handing over information from the incoming ISOMsg. Here I am assuming that since there will be a new Participant for each incoming request, and since the factory methods are invoked inside the Participant's PREPARE, I will have independent Service and DAO objects and database connections for each incoming request. Is this assumption correct?


No, you need to store your DAO objects and JDBC session pointers in the Context.

 
3) Which is the better design: (a) opening, executing, committing, and rolling back the database transaction in the DAO layer, vs. (b) opening a DB connection in PREPARE, calling each Service and DAO method with this connection as a parameter, committing the connection in COMMIT, and rolling the connection back in ABORT in case of any error?

If your participants need to call external systems that may be slow, you need to avoid keeping a JDBC session open. If, on the other hand, everything is processed locally, you can keep the session open and share it among participants.


Nik

May 22, 2017, 10:51:43 PM5/22/17
to jPOS Users
Hi Alejandro,

Thanks for the quick response. 

I have updated my comments inline below.


On Tuesday, May 23, 2017 at 9:59:29 AM UTC+8, Alejandro Revilla wrote:
Please see my comments inline:

2) A dumb ISO request listener which does nothing but the following:

                       
ISOMsg respMsg = (ISOMsg) isoMsg.clone();
respMsg.setDirection(ISOMsg.OUTGOING);
respMsg.setResponseMTI();
context.put(Constants.REQUEST, isoMsg);
context.put(Constants.RESPONSE, respMsg);
context.put(Constants.SOURCE, isoSource);
space.out(queue, context, timeout);



There's a new IncomingListener participant that does basically that; feel free to use it (org.jpos.iso.IncomingListener).

You may want to take a look at http://jpos.org/tutorials for a use case example.

Thanks. Will look into IncomingListener.  

Here, setConfiguration sets up the required timeout, space, queue, etc. 

3) A TransactionManager with a Selector which routes the incoming requests to appropriate Participants based on MTI:

selector = configuration.get(reqIsoMsg.getMTI());
 
4) Different Participants having PREPARE, COMMIT and ABORT methods to handle Transaction requests, Network requests, etc.

Use case: a client concurrently sends several Transaction ISOMsgs with an AccountNumber and a DebitAmount. A new database connection should be opened for each incoming request. The AccountNumber mentioned in the request should be debited by the DebitAmount and the transaction should be committed. An appropriate OK ISOMsg should be returned. In case of insufficient funds, an error ISOMsg should be returned.

With the above use case in mind, what would the design of the application look like?

In my mind, an application has an incoming request flow of View -> Controller -> Service -> DAO -> Database, and vice versa for the response.

Questions:

1) As per my understanding, the TransactionManager creates a new Participant instance for each incoming request, which runs in its own thread with its own context. Is that true?

That's not correct. The TM instantiates a single instance of each participant; it uses the flyweight pattern. You can't use member variables in your participant implementation; that's the reason the prepare/commit/abort methods accept a Context, which is where you place your per-transaction variables.

So, each participant is a singleton, but they work on and process data inside the Context object passed to them by the TransactionManager. As each request has its own Context object, the calls to PREPARE (and COMMIT/ABORT, for that matter) for concurrent requests stay independent of each other. Right?
 
 
2) Is this a good design? ---> Factories invoked inside the Participant's PREPARE return appropriate Service and DAO objects based on MTI. Once the Service and DAO objects are at hand, call the appropriate methods on them, handing over information from the incoming ISOMsg. Here I am assuming that since there will be a new Participant for each incoming request, and since the factory methods are invoked inside the Participant's PREPARE, I will have independent Service and DAO objects and database connections for each incoming request. Is this assumption correct?


No, you need to store your DAO objects and JDBC session pointers in the Context.

With my understanding from (1) above, and assuming that everything is processed locally (from (3) below), does the following sound right?

In PREPARE

java.sql.Connection tranConnection = MyService.getConnection(); // this returns a new database connection object for this individual request
tranConnection.setAutoCommit(false);
tranRolledBack = false;
ctx.put("DBConnection", tranConnection);

boolean allOK = MyService.doMyBusiness(ctx); // all the business logic goes here

if (allOK)
    return PREPARED;

return ABORT;


In COMMIT

java.sql.Connection tranConnection = (Connection) ctx.get("DBConnection");
tranConnection.commit();
tranConnection.close();

In ABORT

java.sql.Connection tranConnection = (Connection) ctx.get("DBConnection");
tranConnection.rollback();
tranConnection.close();

So my PREPARE method will return ABORT in case of any business logic error, in which case the ABORT method will be called by the TransactionManager and any updates/modifications to the database will be rolled back for that individual request.
In the happy flow, PREPARE will return PREPARED, in which case the COMMIT method will be called by the TransactionManager and any updates/modifications to the database will be committed for that individual request.

All concurrent requests will be processed similarly, each with its own Context object and its own DB connection therein.

Right ?

Alejandro Revilla

May 23, 2017, 7:33:27 PM5/23/17
to jPOS Users


So, each participant is a singleton, but they work on and process data inside the Context object passed to them by the TransactionManager. As each request has its own Context object, the calls to PREPARE (and COMMIT/ABORT, for that matter) for concurrent requests stay independent of each other. Right?

Correct. If you are not using PAUSE, you could also use ThreadLocals to pass data between your participants, but it's easier to use the Context.
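As a jPOS-free sketch of that alternative (all names here are illustrative, not the jPOS API): without PAUSE, the whole transaction runs on a single TM thread, so a ThreadLocal can carry per-transaction data between participants, as long as it is cleaned up afterwards because the thread is pooled and reused:

```java
// Illustrative sketch (not the jPOS API): a ThreadLocal carrying
// per-transaction data between participants on a single TM thread.
class TxnScratchpad {
    private static final ThreadLocal<StringBuilder> TRACE =
            ThreadLocal.withInitial(StringBuilder::new);

    static void append(String step) { TRACE.get().append(step).append(';'); }

    static String drain() {
        String s = TRACE.get().toString();
        TRACE.remove(); // always clean up: the TM thread is pooled and reused
        return s;
    }
}
```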
 
 
2) Is this a good design? ---> Factories invoked inside the Participant's PREPARE return appropriate Service and DAO objects based on MTI. Once the Service and DAO objects are at hand, call the appropriate methods on them, handing over information from the incoming ISOMsg. Here I am assuming that since there will be a new Participant for each incoming request, and since the factory methods are invoked inside the Participant's PREPARE, I will have independent Service and DAO objects and database connections for each incoming request. Is this assumption correct?


No, you need to store your DAO objects and JDBC session pointers in the Context.

With my understanding from (1) above, and assuming that everything is processed locally (from (3) below), does the following sound right?

In PREPARE

java.sql.Connection tranConnection = MyService.getConnection(); // this returns a new database connection object for this individual request
tranConnection.setAutoCommit(false);
tranRolledBack = false;
ctx.put("DBConnection", tranConnection);

boolean allOK = MyService.doMyBusiness(ctx); // all the business logic goes here

if (allOK)
    return PREPARED;

return ABORT;


In COMMIT

java.sql.Connection tranConnection = (Connection) ctx.get("DBConnection");
tranConnection.commit();
tranConnection.close();

In ABORT

java.sql.Connection tranConnection = (Connection) ctx.get("DBConnection");
tranConnection.rollback();
tranConnection.close();

So my PREPARE method will return ABORT in case of any business logic error, in which case the ABORT method will be called by the TransactionManager and any updates/modifications to the database will be rolled back for that individual request.
In the happy flow, PREPARE will return PREPARED, in which case the COMMIT method will be called by the TransactionManager and any updates/modifications to the database will be committed for that individual request.


Yes, that's more or less what our standard Open and Close participants do; look at them here:


All concurrent requests will be processed similarly, each with its own Context object and its own DB connection therein.


Sure. That's how it works.
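Pulling the thread together, the confirmed flow can be sketched self-contained, with a FakeConnection standing in for a real JDBC Connection (all names here are illustrative, not jPOS or JDBC API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for java.sql.Connection, so the two-phase flow
// can be followed without a database.
class FakeConnection {
    boolean committed, rolledBack, closed;
    void commit()   { committed = true; }
    void rollback() { rolledBack = true; }
    void close()    { closed = true; }
}

class DebitFlow {
    static final int PREPARED = 1, ABORTED = 2;

    // PREPARE: open one connection per request, then run the business check
    static int prepare(Map<String, Object> ctx, long balance, long debitAmount) {
        ctx.put("DBConnection", new FakeConnection());
        return debitAmount <= balance ? PREPARED : ABORTED; // insufficient funds -> ABORT
    }

    // COMMIT: called by the TM when every participant returned PREPARED
    static void commit(Map<String, Object> ctx) {
        FakeConnection c = (FakeConnection) ctx.get("DBConnection");
        c.commit();
        c.close();
    }

    // ABORT: called by the TM when some participant returned ABORTED
    static void abort(Map<String, Object> ctx) {
        FakeConnection c = (FakeConnection) ctx.get("DBConnection");
        c.rollback();
        c.close();
    }
}
```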

Nik

May 23, 2017, 9:36:21 PM5/23/17
to jPOS Users
Thanks Alejandro!! :)