Some problems with OrientDB


secmask

Aug 13, 2010, 3:33:29 AM
to OrientDB
Hi Luca!
I keep testing OrientDB; here is my code:

String url = "local:/Source/java/releases/0.9.21/db/databases/mydb/mydb";
int nRecord = 100000;
int block = 100;
Orient.instance().registerEngine(new OEngineRemote());
ODatabaseDocumentTx db = new ODatabaseDocumentTx(url).open("admin", "admin");
ODocument[] docs = new ODocument[nRecord / block];

// Pre-initialize the documents so we don't need to create new objects later.
for (int i = 0; i < docs.length; i++) {
    docs[i] = new ODocument();
}
BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
do {
    long start = System.nanoTime();
    for (int j = 0; j < block; j++) {
        db.begin(TXTYPE.OPTIMISTIC);
        System.out.println(new Date() + "\t" + j);
        for (int i = 0; i < nRecord / block; i++) {
            ODocument doc = docs[i];
            doc.reset();
            doc.setDatabase(db);
            doc.setClassName("Person");
            doc.field("name", "secmask-" + j + "." + i);
            doc.field("address", "HN");
            doc.save();
        }
        db.commit();
    }
    System.out.println((System.nanoTime() - start) / 1000000);
} while ("n".equalsIgnoreCase(reader.readLine()));
db.close();
Orient.instance().shutdown();


VM settings:

-server
-XX:+AggressiveOpts
-XX:CompileThreshold=200
-Xms640m
-Xmx640m
-XX:+UseParNewGC
-XX:ParallelGCThreads=20
-XX:+UseConcMarkSweepGC

1. As you can see, I add 100k records per round and commit the transaction every 10k records. After 6 rounds I get this:
1. 14938 ms (6.6k/s)
2. 20475 ms
3. 28979 ms
4. 39951 ms
5. 47052 ms
6. 60009 ms (1.6k/s)

Insert time gets longer as more records are appended; I don't think it's caused by JVM GC.

2. Although I close and shut down the DB engine, the next time I open the DB I still receive a recovery message:
2010-08-13 01:59:31:813 INFO [OTxSegment] Started the recovering of pending transactions after a brute shutdown. Found 198801 entry logs. Scanning...

3. I used JProfiler and got this: http://ca6.upanh.com/11.597.15802863.ZPN0/8132010114034AM.png. Most of the CPU usage is in data serialization. Looking at the ODocument class, I see that you use ORecordSerializerSchemaAware2CSV as the storage format; I tried ORecordSerializerDocument2Binary.NAME instead but it didn't work.

OrientDB's architecture is the best fit for my project, so I really hope there's a way to make it faster, especially for put/query operations; modifications are rare.

Thank you very much.

Luca Garulli

Aug 13, 2010, 4:34:57 AM
to orient-database
Hi!

On 13 August 2010 09:33, secmask <sec...@gmail.com> wrote:
Hi Luca!

I strongly suggest you use the Massive Insert intent. Intents tell OrientDB what you're going to do; the massive-insert intent disables the cache to avoid keeping all the records in memory.

database.declareIntent(new OIntentMassiveInsert());
 
1. As you can see, I add 100k records per round and commit the transaction every 10k records. After 6 rounds I get this:
1. 14938 ms (6.6k/s)
2. 20475 ms
3. 28979 ms
4. 39951 ms
5. 47052 ms
6. 60009 ms (1.6k/s)

Insert time gets longer as more records are appended; I don't think it's caused by JVM GC.
 
It seems to me a memory problem: inserting 1,000,000 records should take linear time. The intent will probably solve your problem. Otherwise, I suggest you simplify your code:
- use only one ODocument instead of a block
- call commit() + begin() every 1,000 items (i % 1000)
- call commit() at the end to commit the last cycle.
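Put together, the suggested loop would look roughly like this. This is a minimal sketch assuming the 0.9.x-era API already used in this thread (ODatabaseDocumentTx, TXTYPE.OPTIMISTIC, OIntentMassiveInsert); the exact calls are illustrative, not authoritative:

```java
// Sketch of the suggested batching: one reusable ODocument,
// commit + begin every 1,000 items, final commit for the last partial batch.
db.declareIntent(new OIntentMassiveInsert());
ODocument doc = new ODocument();
db.begin(TXTYPE.OPTIMISTIC);
for (int i = 0; i < nRecord; i++) {
    doc.reset();
    doc.setDatabase(db);
    doc.setClassName("Person");
    doc.field("name", "secmask-" + i);
    doc.field("address", "HN");
    doc.save();
    if (i % 1000 == 999) {   // batch boundary
        db.commit();
        db.begin(TXTYPE.OPTIMISTIC);
    }
}
db.commit();                 // commit the last cycle
```

Caveat: as secmask reports later in this thread, an open transaction keeps a reference to the ODocument instance, so calling reset() on a document whose save is still pending can lose records; with that finding, reset() is only safe right after a commit().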

2. Although I close and shut down the DB engine, the next time I open the DB I still receive a recovery message:
2010-08-13 01:59:31:813 INFO [OTxSegment] Started the recovering of pending transactions after a brute shutdown. Found 198801 entry logs. Scanning...

Does it find some logs to recover, or are there 0?
 
3. I used JProfiler and got this: http://ca6.upanh.com/11.597.15802863.ZPN0/8132010114034AM.png. Most of the CPU usage is in data serialization. Looking at the ODocument class, I see that you use ORecordSerializerSchemaAware2CSV as the storage format; I tried ORecordSerializerDocument2Binary.NAME instead but it didn't work.

I tested the binary format some months ago and the speed was the same. I haven't tried it in a long time, so today it's probably buggy.

OrientDB's architecture is the best fit for my project, so I really hope there's a way to make it faster, especially for put/query operations; modifications are rare.

I'm proud of it. I'm sure OrientDB is the fastest DBMS around :-)
 
Thank you very much.

bye,
Lvc@

secmask

Aug 13, 2010, 6:11:38 AM
to OrientDB
1. I used database.declareIntent(new OIntentMassiveInsert()); it still gets slower over time.
Then I restarted the JVM (discarding all cached objects) and ran the test again; the speed was much slower than for the first 100k records.
The reason I use a block is that when I use a transaction like this:

ODocument doc = new ODocument();
db.begin(TXTYPE.OPTIMISTIC);
for (int i = 0; i < 100000; i++) {
    doc.reset();
    doc.setDatabase(db);
    doc.setClassName("Person");
    doc.field("name", "secmask-" + j + "." + i); // j is the round counter from the outer test loop
    doc.field("address", "HN");
    doc.save();
}
db.commit();
db.close();

I saw that it doesn't save any record (I was surprised). I guess the transaction keeps a reference to the ODocument object, so calling reset() on it before commit() means no record can be saved.
I don't want to create a new ODocument object for every put, so I pre-create them and reset() them after every commit().

2. It recovers 0 records, but sometimes I receive error messages like this:
2010-08-13 04:10:41:492 SEVE [DirectByteBuffer] Can't write memory buffer to disk. Retrying...
2010-08-13 04:10:42:392 SEVE [DirectByteBuffer] Can't write memory buffer to disk. Retrying...
2010-08-13 04:10:43:261 SEVE [DirectByteBuffer] Can't write memory buffer to disk. Retrying...

3. But maybe it [the binary format] would use less disk space than the CSV format: with the code above it takes up to 300MB to store 1,000,000 records. My data is stored for years, maybe up to thousands of billions of records, so CSV seems expensive. It also costs more disk IO.


Luca Garulli

Aug 13, 2010, 1:19:08 PM
to orient-database
Hi

On 13 August 2010 12:11, secmask <sec...@gmail.com> wrote:
1. I used database.declareIntent(new OIntentMassiveInsert()); it still gets slower over time.
Then I restarted the JVM (discarding all cached objects) and ran the test again; the speed was much slower than for the first 100k records.
The reason I use a block is that when I use a transaction like this:

ODocument doc = new ODocument();
db.begin(TXTYPE.OPTIMISTIC);
for (int i = 0; i < 100000; i++) {
    doc.reset();
    doc.setDatabase(db);
    doc.setClassName("Person");
    doc.field("name", "secmask-" + j + "." + i); // j is the round counter from the outer test loop
    doc.field("address", "HN");
    doc.save();
}
db.commit();
db.close();

I saw that it doesn't save any record (I was surprised). I guess the transaction keeps a reference to the ODocument object, so calling reset() on it before commit() means no record can be saved.
I don't want to create a new ODocument object for every put, so I pre-create them and reset() them after every commit().

Yes, you're right. If the ODocument instance is the same, the tx believes the record is always the same... At this point you can:
- keep your block management, but we need to discover where the memory goes
- use a local database instead of a remote one. This speeds up insertion because no TCP/IP is involved; moreover, you can avoid beginning a TX and just reuse the same ODocument, resetting and saving it. Can you use local (embedded) mode?
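A minimal sketch of the local (embedded), no-transaction pattern Luca describes, assuming the 0.9.x-era API used elsewhere in this thread (ODatabaseDocumentTx, OIntentMassiveInsert, the local: URL scheme). The database path is a placeholder, and the declareIntent(null) cleanup call is my assumption, not taken from the thread:

```java
// Embedded mode: open the database with a local: URL, declare the
// massive-insert intent, and save with no explicit transaction.
ODatabaseDocumentTx db = new ODatabaseDocumentTx(
        "local:/path/to/databases/mydb/mydb").open("admin", "admin"); // hypothetical path
db.declareIntent(new OIntentMassiveInsert());

ODocument doc = new ODocument();
for (int i = 0; i < 1000000; i++) {
    doc.reset();                 // safe here: no open tx is holding a reference
    doc.setDatabase(db);
    doc.setClassName("Person");
    doc.field("name", "secmask-" + i);
    doc.field("address", "HN");
    doc.save();                  // each save goes straight to the storage
}
db.declareIntent(null);          // assumption: clear the intent when done
db.close();
```

Without a transaction holding document references, reusing a single instance avoids both the per-record allocation secmask wanted to skip and the lost-records problem he observed.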
 
2. It recovers 0 records, but sometimes I receive error messages like this:
2010-08-13 04:10:41:492 SEVE [DirectByteBuffer] Can't write memory buffer to disk. Retrying...
2010-08-13 04:10:42:392 SEVE [DirectByteBuffer] Can't write memory buffer to disk. Retrying...
2010-08-13 04:10:43:261 SEVE [DirectByteBuffer] Can't write memory buffer to disk. Retrying...
 
This depends on the OS and the way the JVM treats memory-mapped files. There is a well-known bug in the JVM, open for 5 years and never fixed. The best approach is to wait until the OS frees the resources; that is the reason for those messages.

3. But maybe it [the binary format] would use less disk space than the CSV format: with the code above it takes up to 300MB to store 1,000,000 records. My data is stored for years, maybe up to thousands of billions of records, so CSV seems expensive. It also costs more disk IO.

Upcoming releases of OrientDB will support encrypted and compressed records, so you can use that when it's ready.

bye,
Lvc@

secmask

Aug 13, 2010, 2:01:12 PM
to OrientDB
Oh, sorry if the line "Orient.instance().registerEngine(new OEngineRemote());" made you think I'm using network mode. I copy/paste that line into every test class :P. I use url = "local:/Source/java/releases/0.9.21/db/databases/mydb/mydb" and believe that is local mode; is that right?
I use a TX in this case just because I think it will make your persistence engine flush one big piece of data containing all the records in the TX, instead of small records one by one.
I ran your benchmark class LocalCreateDocumentSpeedTest and the result is the same: it's very fast for the first 30% of progress, then gets slower. Maybe it's caused by the persistence engine?

10% lap elapsed: 20916ms, total: 20916ms, delta: +0%, forecast: 209162ms
20% lap elapsed: 35270ms, total: 56186ms, delta: +68%, forecast: 280931ms
30% lap elapsed: 52167ms, total: 108353ms, delta: +47%, forecast: 361177ms
40% lap elapsed: 85233ms, total: 193586ms, delta: +63%, forecast: 483966ms
50% lap elapsed: 153970ms, total: 347556ms, delta: +80%, forecast: 695113ms
60% lap elapsed: 273759ms, total: 621315ms, delta: +77%, forecast: 1035526ms
70% lap elapsed: 376263ms, total: 997578ms, delta: +37%, forecast: 1425113ms
80% lap elapsed: 493778ms, total: 1491356ms, delta: +31%, forecast: 1864197ms

Linear insert time is what matters most to me now. I'll try on a Linux box instead of the Windows machine I'm using right now.
Thank you for the help and the great project.
secmask.

Luca Garulli

Aug 13, 2010, 3:08:04 PM
to orient-database
Strange, because on my laptop the speed of LocalCreateDocumentSpeedTest is linear:

-> Started the test of 'LocalCreateDocumentSpeedTest' (1000000 cycles)

 10% lap elapsed:    2829ms, total:    2829ms, delta:  +0%, forecast:   28290ms
 20% lap elapsed:    1752ms, total:    4581ms, delta: -39%, forecast:   22905ms
 30% lap elapsed:    1739ms, total:    6320ms, delta:  -1%, forecast:   21066ms
 40% lap elapsed:    1691ms, total:    8011ms, delta:  -3%, forecast:   20027ms
 50% lap elapsed:    1736ms, total:    9747ms, delta:  +2%, forecast:   19494ms
 60% lap elapsed:    1682ms, total:   11429ms, delta:  -4%, forecast:   19048ms
 70% lap elapsed:    1646ms, total:   13075ms, delta:  -3%, forecast:   18678ms
 80% lap elapsed:    1649ms, total:   14724ms, delta:  +0%, forecast:   18405ms
 90% lap elapsed:    1725ms, total:   16449ms, delta:  +4%, forecast:   18276ms
100% lap elapsed:    1811ms, total:   18260ms, delta:  +4%, forecast:   18260ms

Try giving it 1 GB of RAM.

secmask

Aug 13, 2010, 11:43:20 PM
to OrientDB
On my Ubuntu Linux box, with a 1G heap:

10% lap elapsed: 25894ms, total: 25894ms, delta: +0%, forecast: 258942ms
20% lap elapsed: 30482ms, total: 56376ms, delta: +17%, forecast: 281881ms
30% lap elapsed: 41363ms, total: 97739ms, delta: +35%, forecast: 325797ms
40% lap elapsed: 59194ms, total: 156933ms, delta: +43%, forecast: 392333ms
50% lap elapsed: 99800ms, total: 256733ms, delta: +68%, forecast: 513467ms
60% lap elapsed: 146218ms, total: 402951ms, delta: +46%, forecast: 671586ms
70% lap elapsed: 262900ms, total: 665851ms, delta: +79%, forecast: 951217ms

I'll try profiling it again now.

Luca Garulli

Aug 14, 2010, 7:03:44 AM
to orient-database
Hi,
On the same hardware I've noticed that Linux is slower than Windows and MacOSX. It's probably due to the ext3 file system; ext4 is slower too.

My settings are:

-server -XX:+AggressiveOpts -XX:CompileThreshold=200

and my config is:

DELL Notebook model XPS M1530 with Intel(r) Core Duo T7700 2.40Ghz, 3 GB RAM and HD 5.400rpm, O.S. MS Windows Vista, JRE 1.6.0_20-b02

secmask

Aug 14, 2010, 8:17:12 AM
to OrientDB
Yes, I've tried on several OSes (Windows 7, Windows XP, Ubuntu, CentOS) but I cannot get a linear put time (with either DocumentDatabase or ObjectDatabase).
FlatDatabase is OK: I put about 100 million records and got a linear put time.
JProfiler doesn't seem very useful in this case; it makes the app run really slowly, and it takes a long time to put hundreds of thousands of items.
So, Luca, could you run your DocumentDatabase benchmarks on a different machine? If they still look good, I'm stuck here :(
Thank you.


Luca Garulli

Aug 14, 2010, 8:39:19 AM
to orient-database
Hi,
Next Monday I can get different machines running other OSes, so I'll let you know.

Have you tried removing the memory settings when launching your tests? How much memory do you have? Which version of OrientDB are you using?

Lvc@

secmask

Aug 14, 2010, 9:19:56 AM
to OrientDB
What do you mean by "remove memory settings"? Is that "-Xms1G -Xmx1G"? On my machine the JVM uses at most 256MB by default, so I think using -Xmx1G should give better performance.
My machine has 4GB of RAM and I'm using OrientDB built from your SVN; I checked it out a few days ago, URL: http://orient.googlecode.com/svn/trunk.
secmask.


emanuele

Aug 16, 2010, 7:23:32 AM
to OrientDB
Running the trunk version of Orient on an Intel Core Duo 64-bit with 4GB RAM and Ubuntu 10.04 64-bit (with other processes running in the background):

[echo] MASSIVE INSERT 1,000,000 DOCUMENT RECORDS
[java] -> Started the test of 'LocalCreateDocumentSpeedTest' (1000000 cycles)
[java]
[java]  10% lap elapsed: 4415ms, total: 4415ms, delta: +0%, forecast: 44150ms
[java]  20% lap elapsed: 2458ms, total: 6873ms, delta: -45%, forecast: 34365ms
[java]  30% lap elapsed: 2370ms, total: 9243ms, delta: -4%, forecast: 30810ms
[java]  40% lap elapsed: 2264ms, total: 11507ms, delta: -5%, forecast: 28767ms
[java]  50% lap elapsed: 2289ms, total: 13796ms, delta: +1%, forecast: 27592ms
[java]  60% lap elapsed: 2360ms, total: 16156ms, delta: +3%, forecast: 26926ms
[java]  70% lap elapsed: 5133ms, total: 21289ms, delta: +117%, forecast: 30412ms
[java]  80% lap elapsed: 2341ms, total: 23630ms, delta: -55%, forecast: 29537ms
[java]  90% lap elapsed: 3268ms, total: 26898ms, delta: +39%, forecast: 29886ms
[java] 100% lap elapsed: 2389ms, total: 29287ms, delta: -27%, forecast: 29287ms

My result is linear, except for some peaks.

secmask

Aug 17, 2010, 12:44:23 PM
to OrientDB
So strange. What about your new machine, Luca?

Luca Garulli

Aug 17, 2010, 1:38:07 PM
to orient-database
Hi,
Emanuele is the user who tested it on Linux, but his is a 64-bit version; I don't know if that matters.

What file system are you using?

Lvc@

secmask

Aug 17, 2010, 2:17:23 PM
to OrientDB
On Linux, I have one box running Ubuntu 32-bit/ext4 and another running CentOS 64-bit/ext3. If you want, I'll send you an SSH account so you can run the test yourself.
Thanks.
secmask.


Luca Garulli

Aug 18, 2010, 4:47:26 AM
to orient-database
Why not?

secmask

Aug 18, 2010, 6:48:20 AM
to OrientDB
OK, I've just sent the account information to your mail. I greatly appreciate your help.

Secmask.

Luca Garulli

Aug 18, 2010, 10:44:16 AM
to orient-database
Hi,
it seems that you're using almost all the available RAM:

top - 21:42:18 up 1 day,  9:47,  1 user,  load average: 0.00, 0.03, 0.02
Tasks: 151 total,   1 running, 150 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.2%us,  0.0%sy,  0.0%ni, 98.5%id,  1.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4046528k total,  3888320k used,   158208k free,   158284k buffers
Swap:  8193108k total,      104k used,  8193004k free,  2891084k cached

3.8 GB is taken by the OS and all the other processes. This could be the reason why OrientDB insertion is not linear: it has no RAM and needs to swap.

bye,
Lvc@

secmask

Aug 18, 2010, 11:38:26 AM
to OrientDB
Oh, you can see that most of the RAM is used by the system cache, which is made available to applications when needed. As you can see, only 104k of swap is in use, and the swap partition isn't touched while running your test.
I've tested this on another machine that has 48GB of RAM, with more than 10GB available; the result is the same. So I don't know where the problem is.


Luca Garulli

Aug 18, 2010, 11:55:10 AM
to orient-database
Hi,
sorry, I hadn't executed the test because of the RAM.

Now I've run the stress test and the output for the Document database is linear! Your machine reached an average of 33,252 documents inserted per second.

Lvc@


[echo] MASSIVE INSERT 1,000,000 DOCUMENT RECORDS
     [java] -> Started the test of 'LocalCreateDocumentSpeedTest' (1000000 cycles)

     [java]  10% lap elapsed:    3897ms, total:    3897ms, delta:  +0%, forecast:   38970ms
     [java]  20% lap elapsed:    2755ms, total:    6652ms, delta: -30%, forecast:   33260ms
     [java]  30% lap elapsed:    2782ms, total:    9434ms, delta:  +0%, forecast:   31446ms
     [java]  40% lap elapsed:    2531ms, total:   11965ms, delta: -10%, forecast:   29912ms
     [java]  50% lap elapsed:    2693ms, total:   14658ms, delta:  +6%, forecast:   29316ms
     [java]  60% lap elapsed:    2567ms, total:   17225ms, delta:  -5%, forecast:   28708ms
     [java]  70% lap elapsed:    3613ms, total:   20838ms, delta: +40%, forecast:   29768ms
     [java]  80% lap elapsed:    2627ms, total:   23465ms, delta: -28%, forecast:   29331ms
     [java]  90% lap elapsed:    3517ms, total:   26982ms, delta: +33%, forecast:   29980ms
     [java] 100% lap elapsed:    2783ms, total:   29765ms, delta: -21%, forecast:   29765ms
     [java] DUMPING STATISTICS (last reset on: Wed Aug 18 22:52:53 ICT 2010)...
     [java]                                               +-------------------------------------------------------------------+
     [java]                                          Name | Value                                                             |
     [java]                                               +-------------------------------------------------------------------+
     [java]                          OMMapManager.usePage | 10999966
     [java]               OTreeMapEntryP.unserializeValue | 15
     [java]                 OTreeMapEntryP.unserializeKey | 15
     [java]                         OMMapManager.loadPage | 275
     [java]                    OMMapManager.pagesUnloaded | 206

     [java] DUMPING CHRONOS (last reset on: Wed Aug 18 22:52:53 ICT 2010). Times in ms...
     [java]                                               +-------------------------------------------------------------------+
     [java]                                          Name |       last      total        min        max    average      items |
     [java]                                               +-------------------------------------------------------------------+
     [java]      ORecordSerializerStringAbstract.toStream |          0       7514          0        129          0    1000000
     [java]                           OStorageLocal.close |        292        292        292        292        292          1
     [java]    ORecordSerializerStringAbstract.fromStream |          0         22          0         14          0         41
     [java]                            OStorageLocal.open |        715        715        715        715        715          1
     [java]                      OStorageLocal.readRecord |          0          3          0          3          0         29
     [java]                    OStorageLocal.createRecord |          0      11680          0       1023          0    1000000
     [java]         OStorageLocal.getClusterElementCounts |          0          0          0          0          0          1
     [java]                         OMMapManager.loadPage |          0          9          0          1          0        275
     [java]                     OTreeMapEntryP.fromStream |          1          1          0          1          0         10
     [java]                         OStorageLocal.foreach |          1          2          1          1          1          2
     [java]                 OTreeMapPersistent.fromStream |          2          7          0          4          0         10
     [java]                                OMetadata.load |         42         42         42         42         42          1

     [java]    Completed the test of 'LocalCreateDocumentSpeedTest' in 30073 ms. Memory used: 310053312
     [java]    Cycles done.........: 1000000/1000000
     [java]    Cycles Elapsed......: 29765 ms
     [java]    Elapsed.............: 30073 ms
     [java]    Medium cycle elapsed: 0.030073
     [java]    Cycles per second...: 33252.418
     [java]    Free memory diff....: 310053312 (61073752->371127064)
     [java]    Total memory diff...: 310771712 (62062592->372834304)
     [java]    Max memory diff.....: 0 (920911872->920911872)

secmask

Aug 18, 2010, 12:29:47 PM
to OrientDB
Oh, I've just re-run the benchmarks; it's really good now. How did you do that? :D


Luca Garulli

Aug 18, 2010, 12:36:33 PM
to orient-database
I did nothing, simply:

> ant -f build.db.xml clean
> ant -f build.db.xml test
> ant -f build.db.xml stress-test

:-)

secmask

Aug 18, 2010, 2:13:50 PM
to OrientDB
Oh, I've found it: it's the "demo" database.
I always create it with the console tool using the command "create database local:../databases/demo/demo admin admin local", not via the "test" phase of the Ant build.
I don't know how they differ, but the database created with the console tool is really slower than the one created by "test".

secmask.


Luca Garulli

Aug 18, 2010, 2:22:26 PM
to orient-database
Maybe it contains some logical clusters; that slows things down a lot.

Happy you've resolved it.

bye,
Lvc@

secmask

Aug 19, 2010, 12:59:59 AM
to OrientDB
I've looked into the "test" phase and reproduced the jobs it performs. Finally, I saw that I need to create a schema for the Document class I want to use.
Creating the database manually with the console tool doesn't do this job (of course). Doing it improves performance a lot here.
So happy :) problem solved. Thank you, Luca.
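For reference, the schema class that the Ant "test" phase creates (and a console-created database lacks) could also be declared in code before inserting. This is a sketch assuming the schema API of that era (getMetadata().getSchema(), createClass(), createProperty()); the existsClass() check and the exact signatures are assumptions, not taken from the thread:

```java
// Declaring the schema class up front, so inserts are schema-aware instead
// of falling back to the slower path secmask observed with the
// console-created database.
OSchema schema = db.getMetadata().getSchema();
if (!schema.existsClass("Person")) {             // existsClass() is an assumption
    OClass person = schema.createClass("Person");
    person.createProperty("name", OType.STRING);
    person.createProperty("address", OType.STRING);
    schema.save();
}
```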
