Re: [Aurelius] Exception : A Titan graph with the same instance id is already open.


Matthias Broecheler

Nov 12, 2014, 9:06:12 PM11/12/14
to aureliu...@googlegroups.com
This indicates that you have, or previously had, a graph with the same local instance id open on the same machine. Since these instance ids are important to Titan, Titan now checks them for uniqueness.

Most likely, this is left over from a previous instance that was not shut down properly.
If you are trying to run multiple Titan instances on the same machine, then you need to specify a unique machine id for each of them.
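As an illustration of what "specify a unique machine id" can look like: in later Titan releases the instance id can be set per process in the graph configuration. The option name below is as documented for Titan 1.0 and may differ in the 0.x release used here, so check the configuration reference for your version; the file name and backend settings are placeholders.

```properties
# hypothetical titan-app1.properties -- each process gets its own file
storage.backend=cassandra
storage.hostname=127.0.0.1

# Give this process a unique instance id so that two Titan instances
# on the same machine do not collide (option name as of Titan 1.0).
graph.unique-instance-id=app1
```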

HTH,
Matthias

On Sun, Nov 9, 2014 at 2:26 AM, manish kumar <mkj.o...@gmail.com> wrote:
I am getting this exception even if I restart my Linux system.

Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'instanceInitializer' defined in ServletContext resource [/WEB-INF/spring/instanceInitializer.xml]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [com.migific.server.instancefactory.InstanceInitializer]: Constructor threw exception; nested exception is com.thinkaurelius.titan.core.TitanException: A Titan graph with the same instance id [7f0001013235-manish-Vostro-25201] is already open. Might required forced shutdown.
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:278)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1133)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1036)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:505)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:229)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:298)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.findAutowireCandidates(DefaultListableBeanFactory.java:1081)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1006)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:904)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:527)
... 55 more
Caused by: org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [com.migific.server.instancefactory.InstanceInitializer]: Constructor threw exception; nested exception is com.thinkaurelius.titan.core.TitanException: A Titan graph with the same instance id [7f0001013235-manish-Vostro-25201] is already open. Might required forced shutdown.
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:163)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:125)
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:270)
... 67 more
Caused by: com.thinkaurelius.titan.core.TitanException: A Titan graph with the same instance id [7f0001013235-manish-Vostro-25201] is already open. Might required forced shutdown.
at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:129)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:92)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:60)
at com.migific.server.instancefactory.InstanceInitializer.initTitanGraph(InstanceInitializer.java:79)
at com.migific.server.instancefactory.InstanceInitializer.<init>(InstanceInitializer.java:46)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(N

--
You received this message because you are subscribed to the Google Groups "Aurelius" group.
To unsubscribe from this group and stop receiving emails from it, send an email to aureliusgraph...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/aureliusgraphs/b599b857-1095-4312-a231-d03b4d01d5b0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.



--
Matthias Broecheler
http://www.matthiasb.com

Giuseppe Profiti

Jan 20, 2015, 3:21:33 PM1/20/15
to aureliu...@googlegroups.com
I'm sorry if replying to a three-month-old post seems strange, but I have a similar problem and the terminology here is confusing.
I have a single Titan server running on a machine, and on the same machine I start a couple of Java applications that perform operations on a graph hosted by that Titan server.
Sometimes I get the "same instance id" exception; most of the time I do not.
Since the documentation for TitanGraph.shutdown() says that
"Closing the graph database causes a disconnect and possible closing of the underlying storage backend and a release of all occupied resources by this graph database. Closing a graph database requires that all open thread-independent transactions have been closed - otherwise they will be left abandoned."

and since I have multiple applications accessing the graph, it seemed not applicable. So, is there any method I can use to avoid the same-id problem? Is there a configuration setting for the server? Do I have to set something in the application?

Thanks,
Giuseppe

Matthias Broecheler

Jan 22, 2015, 3:13:15 PM1/22/15
to aureliu...@googlegroups.com
When your applications all interact with just one Titan server instance, the problem should not arise unless the server is, in certain cases, not shut down properly.



Giuseppe Profiti

Jan 23, 2015, 7:44:48 AM1/23/15
to aureliu...@googlegroups.com
Thanks for the reply; however, this leads to the question "how do I shut it down properly?" Do I need to use the shutdown() method?
How can this affect other concurrent applications?

Best,
Giuseppe

Matthias Broecheler

Jan 26, 2015, 3:01:35 PM1/26/15
to aureliu...@googlegroups.com
Yes, please use shutdown(). It does not affect other applications if you have a remote connection to the storage backend, since it only closes the local Titan instance.
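A minimal sketch of how an application might guarantee that shutdown() runs even when the process is terminated externally (class name and configuration values are placeholders; TitanFactory.build() and graph.shutdown() are the Titan 0.5/1.0 API):

```java
import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;

public class GraphLifecycle {
    public static void main(String[] args) {
        // Open the graph against a remote storage backend.
        final TitanGraph graph = TitanFactory.build()
                .set("storage.backend", "cassandra")
                .set("storage.hostname", "127.0.0.1")
                .open();

        // Release this process's instance id on JVM exit, e.g. when
        // the application is stopped with Ctrl-C or SIGTERM. A crash
        // (kill -9, OOM) still bypasses this, which is what leaves
        // stale instance ids behind.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                graph.shutdown();
            }
        });

        // ... application work ...
    }
}
```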



sanjana....@gmail.com

Aug 7, 2017, 9:51:45 AM8/7/17
to Aurelius
Sorry for reviving this after two and a half years, but I ran into serious problems with this...

I have multiple applications that open the graph with the same configuration. It all works until one server dies. When this happens, shutdown() never gets called on the graph, and when the server comes back up and tries to reconnect, it gets the exception described in the posts above.

The way I see it, there are two solutions, each with its own problems:

1) Get the instance id somehow, register it in some database, and use it to shut the stale instance down via the management API. The problem I see is that the management API is only available once the graph is open, for which I have to call open(), and the above problem kicks in.

2) Allow specifying an instance ID that can be cleaned up with the management API if necessary. This way each application can have its own instance ID working on the same underlying graph, independent of the others.

The second is better because it solves all the issues and makes management easy. For each server with separate threads that create their own instances, I can keep a database registry of server-id + instance-id combinations, and when a server goes down, the new one can clean up the old one's instances.

Titan has been forked to JanusGraph, and that too seems to have inherited the same issue.

If this can be done differently from what I described, please let me know. That would be very helpful.

Ram

Taras Stasyuk

Aug 15, 2017, 10:37:25 AM8/15/17
to Aurelius
Hi,

We resolved it with option 2).

In general:

Create a global config option:

public static final ConfigOption<Instant> TITAN_HEART_BEAT_TIME =
        new ConfigOption<Instant>(OWN_NS, "heart-beat-time",
                "Automatically updated every 10s; if it is stale, the node is dead.",
                ConfigOption.Type.GLOBAL, Instant.class).hide();

Every 10s, the application that opened the graph updates that option:

globalConfig.set(TITAN_HEART_BEAT_TIME, Instant.now(), uniqueGraphId);

Now, on startup of a new instance, you may close graph instances with an old heart-beat-time.
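For reference, the eviction step can use Titan's management API directly. A sketch (method names as documented for Titan 1.0, where getOpenInstances() marks the caller's own id with a "(current)" suffix; earlier 0.x versions obtain the management system via graph.getManagementSystem() instead of openManagement(); the class name is a placeholder):

```java
import com.thinkaurelius.titan.core.TitanGraph;
import com.thinkaurelius.titan.core.schema.TitanManagement;

public final class StaleInstanceCleaner {
    /**
     * Force-close every registered Titan instance except the current one.
     * Intended to run on startup, after stale instances have been
     * identified (e.g. via a heart-beat scheme like the one above).
     */
    public static void closeStaleInstances(TitanGraph graph) {
        TitanManagement mgmt = graph.openManagement();
        try {
            for (String instanceId : mgmt.getOpenInstances()) {
                // The current instance is suffixed with "(current)";
                // everything else is a candidate for eviction.
                if (!instanceId.contains("(current)")) {
                    mgmt.forceCloseInstance(instanceId);
                }
            }
            mgmt.commit();
        } finally {
            if (mgmt.isOpen()) mgmt.rollback();
        }
    }
}
```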


BR,
Taras