How can Neo4j handle multiple read requests?


Anas

May 23, 2013, 3:28:53 AM
to ne...@googlegroups.com
Hello all,

I am attempting to implement autocomplete in my Python/Django-based webapp with a Neo4j DB.

To do that, I use what is being typed by the user as a prefix and send it to my backend via Ajax requests, so I can rely on the server plugin I created before to get an iterable of Nodes matching the request.

The problem I am encountering is best shown with an example:

Assume the user wants to search for the word "atlanta". He starts typing "a", so a first Ajax request is sent to the backend; as soon as he types "t", a second Ajax request is sent. Here I notice that the backend tries to search for the prefix "at" while the search for the prefix "a" is still being performed (since my application is multithreaded). This finally ends in a crash, and I get no result for either prefix.

I could work around this by creating a mutual-exclusion zone using a Lock, but that's not acceptable, as only one user would be able to search at a time.

So how can I handle all these parallel requests being sent to the Neo4j DB?

Thanks for your help

Michael Hunger

May 23, 2013, 7:59:41 AM
to ne...@googlegroups.com
What kind of crash do you see?

Do you wait the usual 250 ms after the last typed char before starting the search, and do you only send one request at a time from the UX (use setTimeout and a flag for running requests)?

Anas Zakki

May 23, 2013, 8:07:59 AM
to ne...@googlegroups.com
What I meant by crash is that I keep waiting for the search results but finally receive nothing, and instead get an HTTP 500 error (due to a timeout, I guess).

Concerning the Ajax requests: I actually didn't use a timer around the keyup event; I'll try to use one as you said.

FYI, I am relying on the Python REST client (https://neo4j-rest-client.readthedocs.org/en/latest/info.html) to get an instance of the database in my Python code, which I use to call the server plugin method. This instance is actually a global variable.
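One thing I could try, in case the client turns out not to be thread-safe, is to give each worker thread its own instance instead of one shared global, e.g. with `threading.local`. A rough sketch; `make_gdb` is a hypothetical stand-in for `lambda: GraphDatabase(settings.NEO4J_URL)`:

```python
import threading

# Per-thread client holder: each worker thread lazily builds its own
# instance instead of all threads sharing one module-level global.
_local = threading.local()

def get_client(make_gdb):
    """Return this thread's private client, creating it on first use.

    make_gdb is whatever constructs the client, e.g.
    lambda: GraphDatabase(settings.NEO4J_URL)  -- hypothetical here.
    """
    if not hasattr(_local, "client"):
        _local.client = make_gdb()
    return _local.client
```

Each thread then pays the construction cost once and never contends with the others on shared client state.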


--
You received this message because you are subscribed to a topic in the Google Groups "Neo4j" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/neo4j/pL9nO6HgTa0/unsubscribe?hl=en.
To unsubscribe from this group and all its topics, send an email to neo4j+un...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

Anas Zakki

May 23, 2013, 8:17:13 AM
to ne...@googlegroups.com
I added a 250 ms timer before firing the keyup event, but I am still having the same problem.

Here are some snippets of the code I am using:

jQuery:
var to;
clearTimeout(to);
to = setTimeout(function() {
    // while the user hasn't pressed the enter key, we complete the user's words
    $('#query').keyup(function(event) {
        autocomplete();
    });
}, 250);

Python:

# libraries
from neo4jrestclient import constants
from neo4jrestclient.client import *
from neo4jrestclient.client import Node

# get an instance of gdb
gdb = GraphDatabase(settings.NEO4J_URL)

# call the server plugin
nodes = gdb.extensions.CustomQuery.makeCustomQuery(query=query, searchType=searchType, max=maximum)

Michael Hunger

May 23, 2013, 8:40:43 AM
to ne...@googlegroups.com
var to;
// while the user hasn't pressed the enter key, we complete the user's words
$('#query').keyup(function(event) {
    clearTimeout(to);
    to = setTimeout(autocomplete, 250);
});

           

Anas Zakki

May 23, 2013, 9:01:26 AM
to ne...@googlegroups.com
Still having the same problem!
I suspect my app's multithreading to be the problem. As I said, once I use a mutex it works fine, but that's not viable...

Actually, I don't know what exactly happens in the database when it receives two (or more) requests...

Michael Hunger

May 23, 2013, 9:05:39 AM
to ne...@googlegroups.com
They are executed in parallel.

If it hangs, can you execute kill -3 <pid>,

where <pid> is the pid of the neo4j process? Then Neo4j logs a thread dump to data/logs/console.log, I think.

It would also be good if you could provide graph.db/messages.log

Michael

Anas Zakki

May 23, 2013, 9:12:38 AM
to ne...@googlegroups.com
Here is the log file

Anas
messages.log

Michael Hunger

May 23, 2013, 9:48:39 AM
to ne...@googlegroups.com
You have configured too little memory for the server. Please configure at least 1G in conf/neo4j-wrapper.conf.

Also change your disk scheduler in Linux from cfq to noop or deadline.


Also, you didn't supply the thread dump from when it hangs (I assume it is cleaning up memory).

And please upgrade to 1.9.GA

Michael

<messages.log>

Anas Zakki

May 23, 2013, 10:45:43 AM
to ne...@googlegroups.com
Still does not work.

I edited neo4j-wrapper.conf and chose this configuration:
# Initial Java Heap Size (in MB)
#wrapper.java.initmemory=1024

# Maximum Java Heap Size (in MB)
#wrapper.java.maxmemory=1024

I modified the disk scheduler as well:
root@anas-desktop:/sys/block/sda/queue# cat scheduler 
[noop] anticipatory deadline cfq

Actually, I didn't understand what you meant by "you didn't supply the thread dump when it hangs (I assume it is cleaning up memory)"?

And here is the new messages.log file:

Thanks


messages.log

Michael Hunger

May 23, 2013, 11:19:34 AM
to ne...@googlegroups.com
You have to remove the comment sign, see below.

Sent from mobile device

Am 23.05.2013 um 16:45 schrieb Anas Zakki <z.an...@gmail.com>:

Still does not work.

I edited neo4j-wrapper.conf and chose this configuration :
 # Initial Java Heap Size (in MB)
wrapper.java.initmemory=1024

# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=1024
<messages.log>

Anas Zakki

May 23, 2013, 4:23:08 PM
to ne...@googlegroups.com
Sorry, I didn't pay attention to the comment sign.

It is working now, but when I try something like "azenraizeunrize", the Ajax requests stay in the pending state and I get back a 500 error (like what I was getting before)!

Do you think it is still a memory issue? I mean, if I increase the allocated memory, will this behavior go away?

Thanks Michael :)

Michael Hunger

May 23, 2013, 4:38:35 PM
to ne...@googlegroups.com
Could you share your project somewhere?

This should not be an issue, unless you send 100 parallel requests per second or so.

Michael

Anas

May 23, 2013, 6:02:03 PM
to ne...@googlegroups.com
I fear I cannot share the entire project sources, as there are some privacy issues I agreed on (before I started my internship).

However, I can share some code snippets corresponding to what happens to perform autocomplete:

- HTML view: search field
<input id="query" type="text" autocomplete="off"/>

- Then I get what is being typed in jQuery and send an Ajax request:

function autocomplete(){
    query = $('#query').val();
    // uses bootstrap typeahead
    $('#query').typeahead({
        source: function(query, process){
            suggestions = [];
            // JSON GET request to trigger data retrieval from the DB using the manager
            $.getJSON('/engine', {q: query, type: type, trigger: "autocomplete"}, function(data){
                console.log("counter " + cpt++);
                // if we have found data
                if(data.length > 0){
                    // build the suggestion list
                    // ... analyse the JSON and create the list

                    // display suggestions
                    process(suggestions);
                    suggestions.length = 0;
                }
            });
        }
    });
}

$(document).ready(function() {
    var to;
    clearTimeout(to);
    to = setTimeout(autocomplete, 250);
});


The answer is sent from views.py, which queries the DB (I am using Python/Django):

def handler(request):
    """
    Handler for ajax requests for the view.
    """
    template = 'engine/search.html'

    if request.is_ajax():
        if trigger == "autocomplete":
            query = request.GET.get('q')
            searchType = request.GET.get('type')
            results = []
            # gdb already initialized using the python REST client for neo4j
            nodes = gdb.extensions.CustomQuery.makeCustomQuery(query=query, searchType=searchType, max=10)
            for node in nodes:
                results.append(node.properties)
            return HttpResponse(json.dumps(results), mimetype='application/json')


And finally the server plugin makeCustomQuery:

public class CustomQuery extends ServerPlugin {

    @Description("Make a custom query and limit the results")
    @PluginTarget(GraphDatabaseService.class)
    public static Iterable<Node> makeCustomQuery(
            @Source GraphDatabaseService graphDb,
            @Description("The query to be looked for") @Parameter(name = "query") String query,
            @Description("Querying for ?") @Parameter(name = "searchType") String searchType,
            @Description("The maximum number of results.") @Parameter(name = "max") int max) {
        List<Node> results = new ArrayList<Node>();
        if (query.length() != 0) {
            String n4jQuery = null;
            if (searchType.equals("airports")) {
                n4jQuery = "START root=node:types(\"type:airport\") match n-[:IS]->root WHERE n.name =~ \"(?i)" + query + ".*\" OR n.iata =~ \"(?i)" + query + ".*\" RETURN distinct n limit " + max;
            } else if (searchType.equals("airlines")) {
                n4jQuery = "START root=node:types(\"type:airline\") match n-[:IS]->root WHERE n.name =~ \"(?i)" + query + ".*\" OR n.icao =~ \"(?i)" + query + ".*\" RETURN distinct n limit " + max;
            } else if (searchType.equals("cities")) {
                n4jQuery = "START root=node:types(\"type:city\") match n-[:IS]->root WHERE n.name =~ \"(?i)" + query + ".*\" RETURN distinct n limit " + max;
            } else if (searchType.equals("countries")) {
                n4jQuery = "START root=node:types(\"type:country\") match n-[:IS]->root WHERE n.name =~ \"(?i)" + query + ".*\" RETURN distinct n limit " + max;
            }
            ExecutionEngine engine = new ExecutionEngine(graphDb);
            ExecutionResult result = engine.execute(n4jQuery);
            Iterator<Node> iterator = result.columnAs("n");
            int k = 0;
            while (iterator.hasNext() && k < max) {
                Node nd = iterator.next();
                results.add(nd);
                // System.out.println(nd.getProperty("name"));
                k++;
            }
        }
        return results;
    }
}

Hope this is enough to understand what (and how) I am trying to do :)

Michael Hunger

May 23, 2013, 6:12:11 PM
to ne...@googlegroups.com
I think you should performance-test your ServerPlugin first; extract the search functionality into a separate class and use that from both the plugin and the test:
#0 please upgrade to 1.9.GA
#1 don't create the ExecutionEngine in the search method but in the constructor, or lazily like in my code example
#2 use lucene lookups to find the data, not complete graph queries
#3 use parameters, no string concatenation of queries
#4 don't use distinct, do that in your own code

START root=node:types("type:airport") match n-[:IS]->root WHERE n.name =~ {regex} OR n.iata =~ {regex} RETURN n limit {max}
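To illustrate #3 on your Python side: keep the statement text constant and ship the user input in a separate params map (the Neo4j 1.x REST Cypher endpoint accepts a `query` plus `params` body). A rough sketch; the query shape is just an example:

```python
import json

# Parameterized Cypher: the statement is a fixed template; user input and
# the result limit travel in the params map, never concatenated into it.
CYPHER = ('START root=node:types("type:airport") '
          'MATCH n-[:IS]->root '
          'WHERE n.name =~ {regex} OR n.iata =~ {regex} '
          'RETURN n LIMIT {max}')

def cypher_payload(prefix, max_results):
    """Build the JSON body for a POST to /db/data/cypher."""
    return json.dumps({
        "query": CYPHER,
        "params": {"regex": "(?i)" + prefix + ".*", "max": max_results},
    })
```

Besides avoiding injection through the regex, a constant query text lets the server cache the parsed plan across requests.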

#5 don't use cypher but the core API:
- index your nodes by type in different indexes

// create a lucene query object for ultra-fast querying (an OR of TermQuerys), otherwise use a text query

String luceneQuery = "name: " + query + "* OR iata: " + query + "*";
IndexHits<Node> nodes = db.index().forNodes("airport").query(luceneQuery);

List<Node> result = new ArrayList<Node>(max);
for (Node n : nodes) {
    result.add(n);
    if (result.size() == max) break;
}
nodes.close();

return result;



Michael

Anas Zakki

May 26, 2013, 12:44:05 PM
to ne...@googlegroups.com
Hi Michael,

I executed the queries I am using in the server plugin from the REST interface, and I noticed that sometimes they execute in 90 to 200 ms, and other times they take up to 18000 ms! Obviously, with such durations it is impossible to do autocomplete correctly.
I re-indexed my DB so as to get an index on each existing type (airport, airlines...) and tried to use the core API and Lucene lookups, as you suggested, to get better performance.

But I am having some problems "translating" the Cypher queries I am using into Lucene queries:

I)) For a simple query like this:
start n= node:airlines("*:*") where n.iata =~ "(?i)pAr.*" OR n.name =~ "(?i)pAr.*" return n
(you may notice that this query permits case-insensitive search)

the equivalent using the core API:

IndexHits<Node> nodes = graphDb.index().forNodes("airlines").query("name: pAr* OR iata: pAr*");

The airlines index was created using the following:
airlineIndex = indexProvider.nodeIndex("airlines", MapUtil.stringMap("type", "fulltext"));

I have two problems in the latter case:
1) I found that this query is case-sensitive
2) if I have a compound query such as "San Fran", I can neither use the wildcard to perform autocomplete nor make a simple query!
(I saw that the second case was mentioned here: http://docs.neo4j.org/chunked/snapshot/indexing-lucene-extras.html#indexing-lucene-query-objects, but using a WildcardQuery didn't work for me, in addition to the fact that I need an OR clause as I will have two conditions)

II)) For a more complicated query:

Cypher query: "START al= node(id) match ond-[:operatedBy]->al, ond-[:origin]->orig, ond-[:destination]->dest RETURN orig, dest;"

Core API: I don't even know how to transform this one! (there are several MATCH clauses)

Any ideas about this?

Thanks for your help!

Anas


Anas Zakki

May 27, 2013, 3:29:29 AM
to ne...@googlegroups.com
What I am trying to figure out here (in

I)) For a simple query like this:
start n= node:airlines("*:*") where n.iata =~ "(?i)pAr.*" OR n.name =~ "(?i)pAr.*" return n
(you may notice that this query permits case-insensitive search)

the equivalent using the core API:

IndexHits<Node> nodes = graphDb.index().forNodes("airlines").query("name: pAr* OR iata: pAr*");
)

is: how can I use a wildcard for compound words? In the example I used "pAr*" to search for "Paris", but to get "san francisco" from the prefix "san fra", querying "san fra*" returns an error when using the core API, while it works with Cypher queries! In addition, there's the case-insensitive search available in Cypher queries.

Any clue to solve this?

Many thanks

Michael Hunger

May 27, 2013, 3:33:51 AM
to ne...@googlegroups.com
For words with spaces you have to quote them in Lucene, but as it is a fulltext index you will probably have to split the input (as it is split on whitespace in the index as well).

So try this:
#1 double quotes
IndexHits<Node> nodes = graphDb.index().forNodes("airlines").query("name: \"san fra\"* OR iata: \"san fra\"*");

#2 split input

IndexHits<Node> nodes = graphDb.index().forNodes("airlines").query("name: (san fra*) OR iata: (san fra*)");
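The splitting itself can just as well happen on the Python side before calling the plugin: each whitespace-separated term becomes part of a grouped clause, with a trailing wildcard on the last term. A small helper sketch (the field names are just examples):

```python
def prefix_query(user_input, fields):
    """Build a Lucene query string for prefix search on a fulltext index.

    The fulltext index tokenizes on whitespace, so the input is split the
    same way; each field gets a grouped clause with a wildcard on the last
    term, e.g. "san fra" over ("name",) -> "name: (san fra*)".
    """
    terms = user_input.split()
    if not terms:
        return ""
    grouped = "(" + " ".join(terms[:-1] + [terms[-1] + "*"]) + ")"
    return " OR ".join(field + ": " + grouped for field in fields)
```

E.g. `prefix_query("san fra", ["name", "iata"])` yields `name: (san fra*) OR iata: (san fra*)`, the same shape as the #2 variant above.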

Anas

May 27, 2013, 5:32:31 AM
to ne...@googlegroups.com
#2 (split input) worked perfectly :)

(#1 (double quotes) returns an exception)

Anas Zakki

May 27, 2013, 9:06:34 AM
to ne...@googlegroups.com
I have implemented a new server plugin based on the core API and upgraded to Neo4j 1.9 (the final release).

I'm still seeing pending requests in my Chrome console when several requests are sent (even with Neo4j memory higher than 1 Gb); they eventually end with a 500 error with this message:

Error [500]: Internal Server Error. Server got itself in trouble.
Invalid data sent

I also noticed that sometimes (for a reason unknown to me) a 500 error is sent immediately with this error message:

Error [500]: Internal Server Error. Server got itself in trouble.
Invalid data sent: Index `types` does not exist

while I am not using any index named "types"!?

I used the command kill -3 <pid> to get the logs; please find attached the messages.log file.

The server plugin:

@Description("Custom Query Plugin")
public class CustomQuery extends ServerPlugin {

    @Description("Make a custom query and limit the results")
    @PluginTarget(GraphDatabaseService.class)
    public static Iterable<Node> makeCustomQuery(
            @Source GraphDatabaseService graphDb,
            @Description("The query to be looked for") @Parameter(name = "query") String query,
            @Description("Querying for ?") @Parameter(name = "searchType") String searchType,
            @Description("The maximum number of results.") @Parameter(name = "max") int max) {
        List<Node> results = new ArrayList<Node>();
        if (query.length() != 0) {

            String luceneQuery = "name: (" + query + "*)";
            String indexType = null;
            if (searchType.equals("airports")) {
                luceneQuery += " OR iata: (" + query + "*)";
                indexType = "airports";
            } else if (searchType.equals("airlines")) {
                luceneQuery += " OR iata: (" + query + "*)";
                indexType = "airlines";
            } else if (searchType.equals("cities")) {
                indexType = "cities";
            } else if (searchType.equals("countries")) {
                indexType = "countries";
            }
            IndexHits<Node> nodes = graphDb.index().forNodes(indexType).query(luceneQuery);
            if (nodes.size() != 0) {
                for (Node n : nodes) {
                    results.add(n);
                    if (results.size() == max) break;
                }
                nodes.close();
            }
        }
        return results;
    }
}

Thanks
messages.log

Anas Zakki

May 28, 2013, 4:57:06 AM
to ne...@googlegroups.com
Actually, I saw several messages like the following in the logs, and I suspect them to be the reflection of the pending requests on the client side:

INFO  [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 1836ms [total block time: 7.769s]

What does this message mean?

Thanks

Michael Hunger

May 28, 2013, 8:22:39 AM
to ne...@googlegroups.com
That's the time spent in garbage collection.

As I said, I think it has too little memory or there is a memory leak.
Do you use streaming?

I would need the database (or a generator) to profile what happens.

Cheers,

Michael

(neo4j.org) <-[:WORKS_ON]- (@mesirii) -[:TAKES_CARE_OF]-> (you) -[:WORKS_WITH]->(@Neo4j)
Anas Zakki

May 28, 2013, 9:42:30 AM
to ne...@googlegroups.com
I put it here:
http://we.tl/QDTvJFSiWx

Thanks


Michael Hunger

May 29, 2013, 10:22:12 AM
to ne...@googlegroups.com
I created a small project that simulates parallel requests.

With it, without any other config, I can run 10k requests in 4.6 seconds, finding 340k results.

Can you run it on your machine and tell me what it does?

Cheers

Michael

anas_zakki.zip
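The gist of the test, if you want to reproduce it from your Python side, is just to fire many lookups from a thread pool and time them. A rough sketch; `query_fn` stands in for whatever performs one autocomplete lookup and returns its hits:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def stress(query_fn, prefixes, workers=8, rounds=1000):
    """Fire rounds * len(prefixes) lookups from a pool of worker threads;
    return total hits and wall-clock seconds, mirroring the
    'total = ... took ... ms' output quoted in this thread."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(query_fn, p)
                   for _ in range(rounds) for p in prefixes]
        total = sum(len(f.result()) for f in futures)
    return total, time.time() - start
```

Tune `workers` to the number of CPUs available; oversubscribing a small VM just adds scheduling overhead.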

Anas Zakki

May 29, 2013, 10:53:27 AM
to ne...@googlegroups.com
Michael,

I ran AirlineQueryTest.java and I got this: total = 340000 took 9679 ms.

It seems that the query on my computer took double what you got!

Please note that I'm running all this on a virtual machine to which I allocated 1.5 Gb of RAM, with a Core i7 3.4 GHz processor.

Cheers

Anas

Michael Hunger

May 29, 2013, 11:16:06 AM
to ne...@googlegroups.com
You should probably get a machine with more RAM?

I have an iMac with 16GB but can limit the JVM to less, say 512M.

Still fast: total = 340000 took 4409 ms.
Try using fewer threads, as you have fewer CPUs; I have 8 virtual ones.

Perhaps this is interesting for you: http://visualizing.org/datasets/global-flights-network

Anas Zakki

May 29, 2013, 11:20:29 AM
to ne...@googlegroups.com
Actually, I've been told that the company will upgrade the computers' RAM from 4Gb to 8Gb this Friday.

I'll try again once this is done :)

PS: how can I use fewer threads? Is it possible to configure this from the Oracle VM?

Thanks

Anas



Michael Hunger

May 29, 2013, 1:04:16 PM
to ne...@googlegroups.com
Change the thread count in the code

Sent from mobile device

Anas Zakki

Jun 26, 2013, 8:47:31 AM
to ne...@googlegroups.com
Hello Michael,

First of all, thank you for the links (actually, I've just seen them...).

The upgrade has just been done, and I allocated 4Gb to my virtual machine and 2Gb to Neo4j.

Now AirlineQueryTest.java executes in 6s. I think that's due to the fact that I'm using a VM.

Most importantly, autocomplete is now working well and there are no more pending requests. BUT I noticed that it was working fine ONLY AFTER I had made some requests to the DB and left it running for a while!!

Is that normal?


Michael Hunger

Jun 26, 2013, 2:16:16 PM
to ne...@googlegroups.com
Yes, those are caches that have warmed up.

You can do the warm-up yourself by executing relevant queries directly after startup.
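The warm-up can be as simple as replaying a handful of typical prefixes once at startup and discarding the results. A sketch; `run_query` is a placeholder for your real lookup:

```python
def warm_up(run_query, sample_prefixes):
    """Execute representative autocomplete lookups once right after
    startup; the results are discarded -- only the side effect of pulling
    nodes and index pages into the caches matters."""
    warmed = 0
    for prefix in sample_prefixes:
        run_query(prefix)  # e.g. every letter a..z, or your top search terms
        warmed += 1
    return warmed
```

Run it once from whatever hook fires after your app connects to the database, so the first real user doesn't pay the cold-cache price.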

Michael

Sent from mobile device

Anas Zakki

Jun 27, 2013, 9:09:57 AM
to ne...@googlegroups.com
This is exactly what I did.

Thanks again :)

Anas

