JDBC: Virtual Graph from Hive


charbe...@gmail.com

Dec 27, 2015, 1:49:17 PM
to Stardog
Hi all,

I am trying to add virtual graphs from Hive. Has anyone already tried this?

We probably need the following four jars added to the dbms folder to make it work:
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>1.2.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>1.2.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>${hadoop.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${hadoop.version}</version>
</dependency>

I am not sure if it is a JDBC compatibility issue or a dependency issue. (Should I merge the four jars into one?)

Any hints?

Thank you,
--C

Zachary Whitley

Dec 27, 2015, 3:37:15 PM
to sta...@clarkparsia.com


On Dec 27, 2015, at 1:49 PM, charbe...@gmail.com wrote:

Hi all,

I am trying to add virtual graphs from Hive. Has anyone already tried this?

No, but I doubt it's going to work. Virtual graphs are based on SQL, and even then the particular idiosyncrasies of specific databases have to be accounted for, as reflected in the list of supported databases. [1]

HQL is only SQL-like and isn't even a subset of SQL, as far as I recall.

[1] http://docs.stardog.com/#_supported_rdbmses


We probably need the following four jars added to the dbms folder to make it work:
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>1.2.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>1.2.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>${hadoop.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${hadoop.version}</version>
</dependency>

I am not sure if it is a JDBC compatibility issue or a dependency issue. (Should I merge the four jars into one?)

What issue are you referring to here? Whatever it is, I doubt merging jars is going to address it.


Any hints?


Give it a try and see what happens, but like I said, don't be surprised if it doesn't work. I suspect you'd have better odds getting something that actually supports SQL to work, like Impala, but even then I wouldn't be surprised if there were issues.


Thank you,
--C

--
-- --
You received this message because you are subscribed to the C&P "Stardog" group.
To post to this group, send email to sta...@clarkparsia.com
To unsubscribe from this group, send email to
stardog+u...@clarkparsia.com
For more options, visit this group at
http://groups.google.com/a/clarkparsia.com/group/stardog?hl=en
---
You received this message because you are subscribed to the Google Groups "Stardog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to stardog+u...@clarkparsia.com.

charbe...@gmail.com

Dec 28, 2015, 9:02:24 AM
to Stardog
Hi Zachary,

Thanks for your help,

I think you have a point with HQL vs. SQL. Maybe I have a better chance if I rely on HBase + Phoenix, which exposes ANSI-compatible SQL.

Best,
--C

zachary...@wavestrike.com

Dec 28, 2015, 11:05:39 AM
to sta...@clarkparsia.com
On 2015-12-28 09:02, charbe...@gmail.com wrote:
> Hi Zachary,
>
> Thanks for your help,

My pleasure. Full disclosure: I'm just an end user trying to be helpful, so you'll have to wait to hear back from one of the Stardog people for the official word on what you're looking to do.

>
> I think you got a point with HQL vs SQL. Maybe I have a better chance
> if I relied on HBase + Phoenix which exposes ANSI compatible SQL.

I think that would get you closer to the mark. The only thing I'd recommend is to temper your expectations for performance. By the time you have enough data to need HBase, you have a big shotgun pointed at your foot. From the documentation: "Query performance will be best if the GRAPH clause for Virtual Graphs is as selective as possible." [1] So unless you're pulling needles from haystacks, you might have issues.

Best of luck. I'd be interested in hearing how successful you are with
it.


[1] http://docs.stardog.com/#_virtual_graphs

Evren Sirin

Dec 28, 2015, 11:59:59 AM
to Stardog
On Mon, Dec 28, 2015 at 11:05 AM, <zachary...@wavestrike.com> wrote:
> On 2015-12-28 09:02, charbe...@gmail.com wrote:
>>
>> Hi Zachary,
>>
>> Thanks for your help,
>
>
> My pleasure. Full disclosure, I'm just an end user just trying to be helpful
> so you'll have to wait to hear back from one of the Stardog people to get
> the official word on what you're looking to do.
>
>>
>> I think you got a point with HQL vs SQL. Maybe I have a better chance
>> if I relied on HBase + Phoenix which exposes ANSI compatible SQL.
>
>
> I think that would get you closer to the mark.

Correct. Your virtual graph mappings might use very simple SELECT queries that are valid with respect to Hive's SQL-like query syntax, but for any non-trivial SPARQL query, a complex SQL query with joins will be generated. HBase + Phoenix is more likely to work, but each SQL engine/JDBC connector has its own idiosyncrasies, so there might still be some issues. Let us know if you encounter any.

> The only thing I'd recommend
> is to temper your expectations for performance. By the time you have enough
> data to need HBase you have a big shotgun pointed at your foot. From the
> documentation "Query performance will be best if the GRAPH clause for
> Virtual Graphs is as selective as possible." [1] So unless you're pulling
> needles from haystacks you might have issues.

That statement was primarily intended for queries that range over both virtual graphs and regular graphs, but it is also true in general. Executing a query that retrieves a very large result set from the virtual graph will be very costly.

Best,
Evren

charbe...@gmail.com

Dec 30, 2015, 8:27:08 PM
to Stardog, charbe...@gmail.com
Hello Evren and Zachary,

A quick update on Virtual Graphs with Phoenix+ Hbase.

First I ran the experiment with the MySQL JDBC connector just to make sure that the mysql.properties and mappings.ttl files were correct, and once it was working on MySQL, I switched to Phoenix.

I am relying on HDP 2.3.2, which ships with phoenix-4.4.0.2.3.2.0-2950-client.jar; I am using that jar in all my tests.
I used the same Phoenix client jar to test first without Stardog, with a simple program inspired by http://appcrawler.com/wordpress/2014/11/04/hbase-phoenix-jdbc-example/
Then I run my simple program with: java -cp myPhoenix.client-0.0.1-SNAPSHOT.jar:phoenix-4.4.0.2.3.2.0-2950-client.jar semanticstore.myphoenix.client.HbaseClient

It works fine: I can create a DB, insert rows, and list what I added through SELECT * FROM, etc.

On Stardog:
I copied phoenix-4.4.0.2.3.2.0-2950-client.jar into STARDOG_HOME\server\dbms (I am using developer version 4.0-2).
The command to add the virtual graph is:
stardog-admin.bat virtual add --format r2rml phoenix.properties simple.ttl

jdbc.url=jdbc\:phoenix\:192.168.153.130:2181:/hbase-unsecure
jdbc.username=
jdbc.password=
jdbc.driver=org.apache.phoenix.jdbc.PhoenixDriver

I am getting the following error:
ERROR 2006 (INT08): Incompatible jars detected between client and server. Ensure that phoenix.jar is put on the classpath of HBase in every region server: tried
 to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator

I am not sure what is happening, since it works outside Stardog with the same phoenix-4.4.0.2.3.2.0-2950-client.jar dependency.

I feel we are not far off, but I'm not sure what is still missing on the Stardog side.
Any hints?

Thanks,
-- C

Zachary Whitley

Dec 31, 2015, 7:50:55 AM
to sta...@clarkparsia.com


On Dec 30, 2015, at 8:27 PM, charbe...@gmail.com wrote:


My best guess would be that the Phoenix client jar is somehow interacting with ZooKeeper. Stardog ships with ZooKeeper to support clustering, and that might be causing problems. I'm not sure what you can do about it, though.






Zachary Whitley

Dec 31, 2015, 8:02:55 AM
to sta...@clarkparsia.com


On Dec 31, 2015, at 7:50 AM, Zachary Whitley <zachary...@wavestrike.com> wrote:




My best guess would be that the Phoenix client jar is somehow interacting with zookeeper. Stardog ships with zookeeper to support clustering and that might be causing problems. I'm not sure what you can do about it though.


I should add that this is after checking to make sure you haven't somehow accidentally added two different versions of the Phoenix client to the classpath.

charbe...@gmail.com

Dec 31, 2015, 1:16:57 PM
to Stardog
Hello Zachary,

You are absolutely right!
Phoenix interacts directly with ZooKeeper, and I just checked: Stardog ships a zookeeper.jar.
The JDBC URL points to the ZooKeeper host and port: jdbc.url=jdbc\:phoenix\:192.168.153.130:2181:/hbase-unsecure

I tried the following:
- Deleting zookeeper.jar from STARDOG_HOME/server/pack
- Deleting the duplicate class files shared between phoenix-client.jar and protobuf.jar
Neither worked, so I put them back.

I took a closer look at the logs (attached).
Line 121: Caused by: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
I googled this and found the JIRA issue, but I'm not sure how to solve it: https://issues.apache.org/jira/browse/HBASE-14126

Other interesting lines in the logs are lines 6-7: [SPEC-Server-1-1] ERROR org.apache.hadoop.util.Shell - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

Why are the Hadoop binaries needed?

On the same PC running Stardog I can execute: java -cp myPhoenix.client-0.0.1-SNAPSHOT.jar;phoenix-4.4.0.2.3.2.0-2950-client.jar semanticstore.myphoenix.client.HbaseClient
It works fine; myPhoenix-client.jar contains only the HBaseClient class, no other classes.

I am not sure what to try next :/

Thanks,
--C
Stardog-Phoenix-JDBC.txt

Zachary Whitley

Dec 31, 2015, 3:35:10 PM
to sta...@clarkparsia.com


On Dec 31, 2015, at 1:16 PM, charbe...@gmail.com wrote:


A quick look around shows this page [1] mentions a phoenix-<version>-query-server-thin-client.jar driver, and I found a few other references suggesting that you need to add hbase-client.jar, though it isn't mentioned in any of the documentation.



<Stardog-Phoenix-JDBC.txt>

Michael Grove

Jan 5, 2016, 6:28:34 AM
to stardog
On Thu, Dec 31, 2015 at 1:16 PM, <charbe...@gmail.com> wrote:
I took a closer look to the logs (in the attachments)
Line 121: Caused by: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
I googled this and got the jira, but not sure how to solve it: https://issues.apache.org/jira/browse/HBASE-14126

Yes, as suggested in that ticket, it seems the problem is that Stardog and HBase use different versions of Guava and you have both on the classpath. The later version that Stardog uses appears to come first, so Stardog does not complain, but HBase is expecting the older version, which is not being used.
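
Since the root cause here is two Guava versions on one classpath, it can help to print which jar actually "wins" for a given class in the running JVM. A generic, hedged diagnostic sketch (the class name queried is just an example; nothing here is Stardog-specific):

```java
import java.security.CodeSource;

public class WhichJar {
    // Returns the location (jar or directory) a class was loaded from,
    // which helps spot duplicate or conflicting versions on the classpath.
    public static String locationOf(String className) {
        try {
            Class<?> c = Class.forName(className);
            CodeSource src = c.getProtectionDomain().getCodeSource();
            return src == null ? "(bootstrap classpath)" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "(not on classpath)";
        }
    }

    public static void main(String[] args) {
        // e.g. check which jar provides Guava's Stopwatch in this JVM:
        System.out.println(locationOf("com.google.common.base.Stopwatch"));
    }
}
```

Running something like this inside the server's classpath would show whether the Guava 18 jar or the one bundled in the Phoenix client jar comes first.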

Cheers,

Mike

charbe...@gmail.com

Jan 5, 2016, 6:03:42 PM
to Stardog
Hello Michael and Zachary,

Yes, Stardog is using Guava 18.0 while HBase is using version 12. I tried updating the Guava version in the pom.xml, but it seems I would need to modify the code. I will investigate this more, but it does not seem to be a clean way forward.

I followed Zachary's recommendation to try the thin client, since it is based on HTTP (so no Guava), which seemed to be a reasonable workaround.

However, when trying it with Stardog, the properties file contains another "=": jdbc.url=jdbc\:phoenix\:thin:\url\=http://sandbox.hortonworks.com:8765/hbase-unsecure (see the attached properties file).

I am getting this error: Malformed \uxxxx encoding.
Is there any way to escape this properly for Stardog?

Thanks, 
-- C
timeseries.properties

Zachary Whitley

Jan 5, 2016, 6:27:10 PM
to sta...@clarkparsia.com
I think the backslash is off at the \url part. It's interpreting \u as introducing a Unicode code point, and since what follows isn't valid... BOOM! I think you were looking to escape the : before it.
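
This diagnosis can be reproduced with plain java.util.Properties, which is what consumes these files: \u starts a Unicode escape, while \: and \= escape the key/value separators. A small sketch (the URL is a placeholder, not the real connection string):

```java
import java.io.StringReader;
import java.util.Properties;

public class PropsEscapeDemo {
    public static void main(String[] args) throws Exception {
        // "\u" followed by non-hex characters is a malformed Unicode escape,
        // so Properties.load() throws "Malformed \uxxxx encoding."
        try {
            new Properties().load(new StringReader(
                "jdbc.url=jdbc\\:phoenix\\:thin:\\url\\=http://example:8765"));
        } catch (IllegalArgumentException e) {
            System.out.println("bad: " + e.getMessage());
        }

        // Escaping the ':' before "url" (not the 'u') parses fine:
        Properties ok = new Properties();
        ok.load(new StringReader(
            "jdbc.url=jdbc\\:phoenix\\:thin\\:url\\=http://example:8765"));
        System.out.println(ok.getProperty("jdbc.url"));
        // -> jdbc:phoenix:thin:url=http://example:8765
    }
}
```

Note the double backslashes are only Java string-literal escaping; the .properties file itself contains single backslashes.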
<timeseries.properties>

charbe...@gmail.com

Jan 6, 2016, 3:26:36 PM
to Stardog
Hello Zachary,

Yes, you are right: the \ and : were not in the correct order.

I have a "no suitable driver" error now: No suitable driver found for jdbc:phoenix:thin:url=http://192.168.153.130:8765
For some reason Stardog cannot see the jar, or the org.apache.phoenix.queryserver.client.Driver class inside it?
I wonder if there is something missing in the manifest.

I shared the jar file just in case (https://drive.google.com/folderview?id=0B-xegZTg87qVd01GM2JTNl9NUHM&usp=sharing).
It works fine outside Stardog with a simple program reading DB tables.

Best,
-- C
timeseries.properties

Evren Sirin

Jan 6, 2016, 4:42:30 PM
to Stardog
I see that there is a problem with automatically loading the JDBC driver through the service loader (the jar file declares a different driver class for the class loader). We will fix this issue, but as a workaround you can start the server as follows to make sure the driver is loaded:

$ STARDOG_JAVA_ARGS="-Djdbc.drivers=org.apache.phoenix.queryserver.client.Driver" bin/stardog-admin server start


Best,
Evren

charbel kaed

Jan 6, 2016, 6:33:03 PM
to sta...@clarkparsia.com
Thanks Evren! It worked well! I am able to import the virtual DB from HBase through the thin client (HTTP)!

But queries seem to trigger errors in Stardog, or some kind of loop in the REPLACE part?
I tried a simple query:
SELECT * {
   GRAPH <virtual://timeseries> {
      ?a a ?b .
   }
}

Error!

com.complexible.stardog.plan.eval.operator.OperatorException: Error executing SQL query:  error while executing SQL "SELECT     1 AS "aQuestType", NULL AS "aLang", ('http://www.myontology/Clients/CompanyA/Installation123#' || REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(QVIEW1."IDENT",' ', '%20'),'!', '%21'),'@', '%40'),'#', '%23'),'$', '%24'),'&', '%26'),'*', '%42'), '(', '%28'), ')', '%29'), '[', '%5B'), ']', '%5D'), ',', '%2C'), ';', '%3B'), ':', '%3A'), '?', '%3F'), '=', '%3D'), '+', '%2B'), '''', '%22'), '/', '%2F')) AS "a",     1 AS "bQuestType", NULL AS "bLang", 'http://www.myontology/partner/Devices#TimeSeries' AS "b"  FROM  TS8 QVIEW1 WHERE  QVIEW1."IDENT" IS NOT NULL": response code 500 SQL query:  SELECT     1 AS "aQuestType", NULL AS "aLang", ('http://www.myontology/Clients/CompanyA/Installation123#' || REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(QVIEW1."IDENT",' ', '%20'),'!', '%21'),'@', '%40'),'#', '%23'),'$', '%24'),'&', '%26'),'*', '%42'), '(', '%28'), ')', '%29'), '[', '%5B'), ']', '%5D'), ',', '%2C'), ';', '%3B'), ':', '%3A'), '?', '%3F'), '=', '%3D'), '+', '%2B'), '''', '%22'), '/', '%2F')) AS "a",     1 AS "bQuestType", NULL AS "bLang", 'http://www.myontology/partner/Devices#TimeSeries' AS "b"  FROM  TS8 QVIEW1 WHERE  QVIEW1."IDENT" IS NOT NULL




Zachary Whitley

Jan 6, 2016, 7:37:23 PM
to sta...@clarkparsia.com
It doesn't look like Phoenix supports the REPLACE function. [1] The closest is REGEXP_REPLACE. The query is long, but it's not a loop; it's just URL-encoding the result.

It looks like you're down to needing support for the particular capabilities of Phoenix.
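
For what it's worth, the nested REPLACE calls in the generated SQL amount to a character-by-character percent-encoding of the IDENT column so it can be embedded in an IRI. A sketch reproducing the same substitution chain in Java (class name and sample value are made up; the pairs are copied as-is from the generated SQL, including its odd '%42' for '*'):

```java
public class IriEncodeDemo {
    // Substitution pairs exactly as they appear in the generated SQL.
    private static final String[][] PAIRS = {
        {" ", "%20"}, {"!", "%21"}, {"@", "%40"}, {"#", "%23"},
        {"$", "%24"}, {"&", "%26"}, {"*", "%42"}, {"(", "%28"},
        {")", "%29"}, {"[", "%5B"}, {"]", "%5D"}, {",", "%2C"},
        {";", "%3B"}, {":", "%3A"}, {"?", "%3F"}, {"=", "%3D"},
        {"+", "%2B"}, {"'", "%22"}, {"/", "%2F"}
    };

    public static String encode(String ident) {
        for (String[] p : PAIRS) {
            ident = ident.replace(p[0], p[1]);  // same effect as SQL REPLACE
        }
        return ident;
    }

    public static void main(String[] args) {
        System.out.println(encode("sensor 1/temp"));  // -> sensor%201%2Ftemp
    }
}
```

This shows the chain is verbose but deterministic, not a loop; the failure is simply that the target engine has to support a REPLACE function for Stardog's generated SQL to run.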


charbe...@gmail.com

Jan 7, 2016, 10:19:24 AM
to Stardog
Hello,

This means we cannot use Phoenix as it is.
I will try Apache Ignite later this week.
It seems compatible with the H2 DB, and Stardog seems to be as well:
https://apacheignite.readme.io/docs/sql-queries

Best,
-- C

charbe...@gmail.com

Jan 14, 2016, 6:29:02 PM
to Stardog, charbe...@gmail.com
Hi,

I am giving it a last shot by testing with UnityJDBC.

I am getting the following error: 

Error!

com.complexible.stardog.plan.eval.operator.OperatorException: Dangling meta character '*' near index 0  *  ^


Any hints?


Thanks,

Charbel
