Too many open files.


Lance

Feb 4, 2014, 4:55:19 AM
to graphchi...@googlegroups.com

I ran "unittest.sh" in the root directory and got the following results. After careful analysis, I couldn't figure out what I should do.

My environment is Mac OS X Mavericks with gcc-4.2.


*******************TESTING ALS-TENSOR**********************

****************************************************************************

WARNING:  common.hpp(print_copyright:197): GraphChi Collaborative filtering library is written by Danny Bickson (c). Send any  comments or bug reports to danny....@gmail.com 

[training] => [time_smallnetflix]

[validation] => [time_smallnetflixe]

[lambda] => [0.065]

[minval] => [1]

[maxval] => [5]

[max_iter] => [6]

[K] => [27]

[quiet] => [1]

[clean_cache] => [1]


 === REPORT FOR sharder() ===

[Timings]

edata_flush: 2.27904s (count: 153, min: 0.003191s, max: 0.053458, avg: 0.0148957s)

execute_sharding: 4.06235 s

finish_shard.sort: 0.522971s (count: 3, min: 0.126848s, max: 0.207576, avg: 0.174324s)

preprocessing: 2.55512 s

shard_final: 3.1731s (count: 3, min: 1.02136s, max: 1.10454, avg: 1.0577s)

[Other]

app: sharder


 === REPORT FOR sharder() ===

[Timings]

edata_flush: 0.38689s (count: 25, min: 0.010174s, max: 0.019995, avg: 0.0154756s)

execute_sharding: 0.65309 s

finish_shard.sort: 0.064785 s

preprocessing: 0.64888 s

shard_final: 0.518701 s

[Other]

app: sharder

ERROR:    stripedio.hpp(open_session:404): Could not open: time_smallnetflix.edata..Z.e16B.2_3_blockdir_1048576/13 session: 84 error: Too many open files

ERROR:    stripedio.hpp(open_session:404): Could not open: time_smallnetflix.edata..Z.e16B.1_3_blockdir_1048576/15 session: 85 error: Too many open files

Assertion failed: (rddesc>=0), function open_session, file ../../src/io/stripedio.hpp, line 406.

./unittest.sh: line 66:  2853 Abort trap: 6           ./toolkits/collaborative_filtering/als_tensor --training=time_smallnetflix --validation=time_smallnetflixe --lambda=0.065 --minval=1 --maxval=5 --max_iter=6 --K=27 --quiet=1 --clean_cache=1

Danny Bickson

Feb 4, 2014, 8:39:21 AM
to graphchi-discuss, GraphLab Users
Hi Lance, 
You should increase the limit on open files using the shell command ulimit,
for example "ulimit -n 10000".
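A sketch of how that might look in a shell session (the value 10000 is just an example; the soft limit you set cannot exceed the hard limit reported by `ulimit -Hn`):

```shell
# Show the current soft and hard limits on open file descriptors
ulimit -Sn
ulimit -Hn
# Raise the soft limit for this shell session; it only affects processes
# started from this shell, so re-run unittest.sh from the same shell.
ulimit -n 10000 || echo "requested value exceeds the hard limit"
```

Note that the change lasts only for the current shell session; a fresh terminal starts with the default limit again.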

Let us know if this works for you!

Danny Bickson
Co-Founder
GraphLab Inc.



Lance

Feb 4, 2014, 9:12:38 PM
to graphchi...@googlegroups.com, GraphLab Users, bic...@graphlab.com
Hi Danny,

The errors above have been resolved. However, the following errors appeared :(

****************************************************************************

*******************ITEM-SIM-TO-RATING**********************

****************************************************************************

WARNING:  common.hpp(print_copyright:197): GraphChi Collaborative filtering library is written by Danny Bickson (c). Send any  comments or bug reports to danny....@gmail.com 

[training] => [./toolkits/collaborative_filtering/unittest/itemsim2rating.unittest.graph]

[similarity] => [./toolkits/collaborative_filtering/unittest/itemsim2rating.unittest.similarity]

[K] => [4]

[nshards] => [1]

[quiet] => [0]

[undirected] => [1]

[debug] => [1]

WARNING:  chifilenames.hpp(find_shards:271): Could not find shards with nshards = 1

WARNING:  chifilenames.hpp(find_shards:272): Please define 'nshards 0' or 'nshards auto' to automatically detect.

INFO:     sharder.hpp(start_preprocessing:370): Starting preprocessing, shovel size: 13107200

INFO:     io.hpp(convert_matrixmarket_and_item_similarity:391): Starting to read matrix-market input. Matrix dimensions: 5 x 4, non-zeros: 9

DEBUG:    io.hpp(convert_matrixmarket_and_item_similarity:426): Finished loading 9 ratings from file: ./toolkits/collaborative_filtering/unittest/itemsim2rating.unittest.graph

FATAL:    io.hpp(convert_matrixmarket_and_item_similarity:448): Item similarity to itself found for item 1 in line; 4

libc++abi.dylib: terminating with uncaught exception of type char*

./unittest.sh: line 85:  1663 Abort trap: 6           ./toolkits/collaborative_filtering/itemsim2rating --training=./toolkits/collaborative_filtering/unittest/itemsim2rating.unittest.graph --similarity=./toolkits/collaborative_filtering/unittest/itemsim2rating.unittest.similarity --K=4 execthreads 1 --nshards=1 --quiet=0 --undirected=1 --debug=1

I have no idea what's wrong this time.

On Tuesday, February 4, 2014 at 9:39:21 PM UTC+8, Danny Bickson wrote:

Danny Bickson

Feb 4, 2014, 9:23:09 PM
to graphchi-discuss, GraphLab Users
Hi Lance, 
This error means that your item similarity file contains a similarity between item 1 and itself.
The entry appears in data line 4 (not counting the matrix-market header lines; line counting starts from zero).
Similarities are only allowed between an item and other, different items.
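A quick way to spot such rows before running the toolkit is a short awk scan. This is a sketch, not part of the toolkit: it assumes a matrix-market file whose comment lines start with `%` followed by a single dimensions line, and it reports zero-based data-line numbers to match GraphChi's error message. The sample file below is made up for illustration:

```shell
# A tiny matrix-market similarity file with one bad (self-similarity) row
cat > /tmp/sim_example.mm <<'EOF'
%%MatrixMarket matrix coordinate real general
5 4 3
1 2 0.5
2 2 0.9
3 1 0.2
EOF

# Skip comment lines and the dimensions line, then flag rows where the
# two item ids are equal; n counts data lines from zero.
awk '/^%/ {next}
     !dim {dim=1; next}
     $1 == $2 {print "self-similarity for item", $1, "at data line", n}
     {n++}' /tmp/sim_example.mm
# prints: self-similarity for item 2 at data line 1
```

Removing (or reindexing) the flagged rows before running itemsim2rating should avoid the FATAL error above.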

Thanks,

Danny Bickson
Co-Founder
GraphLab Inc.


Lance

Feb 4, 2014, 9:50:00 PM
to graphchi...@googlegroups.com, GraphLab Users, bic...@graphlab.com
Hi, that's all right! I just didn't expect to run into so many errors when running "unittest.sh" according to the instructions. I will move my test platform from Mac OS to CentOS.

On Wednesday, February 5, 2014 at 10:23:09 AM UTC+8, Danny Bickson wrote: