As you suggested earlier, I am inserting data into the RDBMSDataStore programmatically so that I can use the default grounding process. However, as I mentioned before, many recommendation tasks run in parallel, and this creates conflicts in the database layer: I found out that the same table is used for both metadata and partitions.
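For context, my programmatic insertion looks roughly like this (a minimal sketch, assuming the DataStore.getInserter/Inserter API; ratingPredicate and the tuple values are just placeholders):

// Sketch of the programmatic data loading: get an Inserter for a
// (predicate, partition) pair and push tuples into it.
// ratingPredicate is a registered StandardPredicate (declaration omitted).
Partition obsPartition = dataStore.getPartition("observations");
Inserter inserter = dataStore.getInserter(ratingPredicate, obsPartition);
inserter.insertValue(0.8, "user1", "item42"); // placeholder observation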
Let me explain how I have implemented my code. There is a PSLRecommendationJob class that is similar to the PSL example classes:
import org.linqs.psl.database.DataStore;
import org.linqs.psl.database.Partition;

public class PSLRecommendationJob extends Job {
    private static final String PARTITION_OBSERVATIONS = "observations";
    private static final String PARTITION_TARGETS = "targets";

    // Shared RDBMSDataStore handle (initialization omitted).
    private DataStore dataStore;

    @Override
    public void run() {
        // Fetch (or create) the named partitions in the shared data store.
        Partition obsPartition = dataStore.getPartition(PARTITION_OBSERVATIONS);
        Partition targetsPartition = dataStore.getPartition(PARTITION_TARGETS);

        definePredicates();
        defineRules();
        loadData(obsPartition, targetsPartition);
        runInference(obsPartition, targetsPartition);
        handleResults(targetsPartition);
    }
}
My program is a background service that listens for incoming recommendation requests, so several requests can arrive in parallel, each creating its own instance of the PSLRecommendationJob class. Internally, however, you use the same H2 database instance for all of the partitions, which causes an error when the second PSLRecommendationJob instance tries to create the same PARTITION_OBSERVATIONS partition. I can avoid the error by prefixing a job id to PARTITION_OBSERVATIONS (see the sketch below), but over the lifetime of my application this produces many partitions that have no use once their job is completed, and I am not sure whether these separate partitions ensure that the input data of each recommendation job is sealed off from the other jobs.
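For reference, the workaround I mean looks roughly like this (a minimal sketch: the jobId field is illustrative, and the cleanup assumes DataStore.deletePartition is available, as in the PSL 2.x API):

// Inside PSLRecommendationJob, with a per-request jobId field (illustrative).
@Override
public void run() {
    // Prefixing the partition names with the job id avoids the name
    // collision between parallel jobs on the shared H2 instance.
    Partition obsPartition = dataStore.getPartition(jobId + "_" + PARTITION_OBSERVATIONS);
    Partition targetsPartition = dataStore.getPartition(jobId + "_" + PARTITION_TARGETS);

    try {
        definePredicates();
        defineRules();
        loadData(obsPartition, targetsPartition);
        runInference(obsPartition, targetsPartition);
        handleResults(targetsPartition);
    } finally {
        // Drop the per-job partitions once the job finishes so they do not
        // accumulate over the lifetime of the service.
        dataStore.deletePartition(obsPartition);
        dataStore.deletePartition(targetsPartition);
    }
}

Even with this in place, my question stands: does giving each job its own partitions actually seal its input data from the other jobs running in the same data store?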
I hope I have described my problem clearly, and I am looking forward to hearing your suggestions!
Thanks