Relay log task has unexpectedly terminated; logs may not be accessible


Jan Daryl Yap

Sep 7, 2019, 11:53:08 PM
to Tungsten Replicator Discuss
I added a node to my current setup of 1 main and 2 nodes, so I now have 1 main and 3 nodes. But I got this error after I tried to do a backup and restore from the main to my new node.
Thank you for your help.




---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
NAME                     VALUE
----                     -----
appliedLastEventId     : NONE
appliedLastSeqno       : -1
appliedLatency         : -1.0
autoRecoveryEnabled    : false
autoRecoveryTotal      : 0
channels               : -1
clusterName            : ipbalamban
currentEventId         : NONE
currentTimeMillis      : 1567914058129
dataServerHost         : ipbalamban
extensions             :
host                   : ipbalamban
latestEpochNumber      : -1
masterConnectUri       : thl://localhost:/
masterListenUri        : thl://ipbalamban:2112/
maximumStoredSeqNo     : -1
minimumStoredSeqNo     : -1
offlineRequests        : NONE
pendingError           : Event extraction failed
pendingErrorCode       : NONE
pendingErrorEventId    : NONE
pendingErrorSeqno      : -1
pendingExceptionMessage: Relay log task has unexpectedly terminated; logs may not be accessible
pipelineSource         : UNKNOWN
relativeLatency        : -1.0
resourcePrecedence     : 99
rmiPort                : 10000
role                   : master
seqnoType              : java.lang.Long
serviceName            : ipbalamban
serviceType            : unknown
simpleServiceName      : ipbalamban
siteName               : default
sourceId               : ipbalamban
state                  : OFFLINE:ERROR
timeInStateSeconds     : 15.134
timezone               : GMT
transitioningTo        :
uptimeSeconds          : 17.71
useSSLConnection       : false
version                : Tungsten Replicator 3.0.1 build 64


Chris Parker

Sep 8, 2019, 11:17:36 AM
to tungsten-repl...@googlegroups.com
How did you do the backup and restore?
Have you tried just putting the replicator online again?

You could stop the replicator, clear any THL on this new slave, and bring it online again; it's possible the THL didn't transfer properly.

If you can afford downtime on one of the existing slaves, I would recommend stopping the replicator and the database on that slave and copying it instead. That way you know it is "frozen" in time and can be sure it's at a known state.
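For reference, a minimal sketch of that recovery sequence, assuming a default /opt/continuent install with `trepctl` and `replicator` on the PATH (paths and service layout will vary per deployment):

```shell
# On the problem slave: take the service offline and stop the process.
trepctl offline
replicator stop

# Clear any partially transferred THL. /opt/continuent/thl is the
# default location; adjust if the THL directory was customised.
THL_DIR=/opt/continuent/thl
rm -rf "${THL_DIR:?}"/*   # ${VAR:?} aborts if THL_DIR is unset/empty

# Restart and bring the service back online, then check the state.
replicator start
trepctl online
trepctl status | grep state
```

Depending on the version, `trepctl reset` (run while the service is offline) can also clear the service state, including THL, without removing files by hand.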



Chris Parker
Director, Professional Services EMEA & APAC
Continuent Ltd., a Delaware Corporation

https://www.continuent.com


--
You received this message because you are subscribed to the Google Groups "Tungsten Replicator Discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tungsten-replicator...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/tungsten-replicator-discuss/7e334527-d04d-408e-b978-c18e24139cdd%40googlegroups.com.

Jan Daryl Yap

Sep 8, 2019, 8:47:32 PM
to Tungsten Replicator Discuss
Hello Chris,
I did the backup using trepctl backup on the master and rsynced it to my new node, then ran trepctl restore on the node.
I also put the replicator online again and removed the THL, but the replicator is still offline with the same error.
My existing slaves are all online except for the new slave.
Thank you for your response.
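Roughly, the steps I ran were the following (the hostname and backup path here are illustrative; /opt/continuent/backups is the default storage location):

```shell
# On the master: take a backup with the configured backup agent.
trepctl backup

# Copy the backup storage to the new node ("newnode" is a placeholder).
rsync -av /opt/continuent/backups/ newnode:/opt/continuent/backups/

# On the new node: restore from the copied backup.
trepctl restore
```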


Chris Parker

Sep 9, 2019, 4:00:37 AM
to tungsten-repl...@googlegroups.com
Hi,

Can you send through the log files from the tungsten/tungsten-replicator/log directory?

Specifically trepsvc.log, and any log from the provisioning too.

I suspect something didn't work with the backup, based on the brief message in the status output.

Out of interest, as you have one master with additional slaves, have you considered clustering for better HA?

Chris Parker
Director, Professional Services EMEA & APAC
Continuent Ltd., a Delaware Corporation

https://www.continuent.com


Jan Daryl Yap

Sep 13, 2019, 6:15:43 AM
to Tungsten Replicator Discuss
Hi Chris, this is from my log file:

STATUS | wrapper  | 2019/09/07 14:25:25 | --> Wrapper Started as Daemon
STATUS | wrapper  | 2019/09/07 14:25:25 | Java Service Wrapper Community Edition 64-bit 3.5.17
STATUS | wrapper  | 2019/09/07 14:25:25 |   Copyright (C) 1999-2012 Tanuki Software, Ltd. All Rights Reserved.
STATUS | wrapper  | 2019/09/07 14:25:25 |     http://wrapper.tanukisoftware.com
STATUS | wrapper  | 2019/09/07 14:25:25 |
STATUS | wrapper  | 2019/09/07 14:25:25 | Launching a JVM...
INFO   | jvm 1    | 2019/09/07 14:25:26 | WrapperManager: Initializing...
INFO   | jvm 1    | 2019/09/07 14:25:26 | 2019-09-07 14:25:26,651 [ - WrapperSimpleAppMain] INFO  management.ReplicationServiceManager Tungsten Replicator 3.0.1 build 64
INFO   | jvm 1    | 2019/09/07 14:25:26 | 2019-09-07 14:25:26,654 [ - WrapperSimpleAppMain] INFO  management.ReplicationServiceManager Using default timezone : sun.util.calendar.ZoneInfo[id="Asia/Manila",offset=28800000,dstSavings=0,useDaylight=false,transitions=10,lastRule=null]
INFO   | jvm 1    | 2019/09/07 14:25:26 | 2019-09-07 14:25:26,654 [ - WrapperSimpleAppMain] INFO  management.ReplicationServiceManager Starting replication service manager
INFO   | jvm 1    | 2019/09/07 14:25:26 | 2019-09-07 14:25:26,662 [ - WrapperSimpleAppMain] INFO  management.ReplicationServiceManager Loading security information
INFO   | jvm 1    | 2019/09/07 14:25:26 | 2019-09-07 14:25:26,700 [ - WrapperSimpleAppMain] INFO  management.ReplicationServiceManager Compatibility note: Replicator time zone is set from services.properties and defaults to GMT
INFO   | jvm 1    | 2019/09/07 14:25:26 | 2019-09-07 14:25:26,700 [ - WrapperSimpleAppMain] INFO  management.ReplicationServiceManager Setting time zones via wrapper.conf -Duser.timezone option is deprecated
INFO   | jvm 1    | 2019/09/07 14:25:26 | 2019-09-07 14:25:26,701 [ - WrapperSimpleAppMain] INFO  management.ReplicationServiceManager Consult system documentation before making any changes to time zone-related settings
INFO   | jvm 1    | 2019/09/07 14:25:26 | 2019-09-07 14:25:26,706 [ - WrapperSimpleAppMain] INFO  management.ReplicationServiceManager Storing host time zone: id=Asia/Manila display name=Philippines Standard Time
INFO   | jvm 1    | 2019/09/07 14:25:26 | 2019-09-07 14:25:26,706 [ - WrapperSimpleAppMain] INFO  management.ReplicationServiceManager Setting replicator JVM time zone: id=GMT display name=Greenwich Mean Time
INFO   | jvm 1    | 2019/09/07 14:25:26 | 2019-09-07 06:25:26,792 [ - WrapperSimpleAppMain] INFO  jmx.JmxManager JMXConnector: security.propoerties=/opt/continuent/releases/tungsten-replicator-3.0.1-64_pid4608/cluster-home/bin/../../cluster-home/conf/security.properties
INFO   | jvm 1    | 2019/09/07 14:25:26 |        use.authentication=false
INFO   | jvm 1    | 2019/09/07 14:25:26 |        use.tungsten.authenticationRealm.encrypted.password=true
INFO   | jvm 1    | 2019/09/07 14:25:26 |        use.encryption=false
INFO   | jvm 1    | 2019/09/07 14:25:26 | 2019-09-07 06:25:26,793 [ - WrapperSimpleAppMain] INFO  jmx.JmxManager JMXConnector started at address service:jmx:rmi://ipbalamban:10001/jndi/rmi://ipbalamban:10000/replicator
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:26,981 [ - WrapperSimpleAppMain] INFO  management.ReplicationServiceManager Starting the internal/local replication service 'ipbalamban'
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:26,981 [ - WrapperSimpleAppMain] INFO  management.ReplicationServiceManager Starting replication service: name=ipbalamban
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:26,990 [ - WrapperSimpleAppMain] INFO  management.OpenReplicatorManager Configuring state machine for replication service: ipbalamban
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,279 [ - pool-2-thread-1] INFO  management.OpenReplicatorManager Replicator role: master
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,279 [ - pool-2-thread-1] INFO  management.OpenReplicatorManager Loading plugin: key=replicator.plugin.tungsten class name=com.continuent.tungsten.replicator.management.tungsten.TungstenPlugin
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,375 [ - pool-2-thread-1] INFO  management.OpenReplicatorManager Plug-in configured successfully: key=replicator.plugin.tungsten class name=com.continuent.tungsten.replicator.management.tungsten.TungstenPlugin
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,396 [ - pool-2-thread-1] INFO  conf.ReplicatorRuntime Replicator role: master
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,396 [ipbalamban - pool-2-thread-1] INFO  conf.ReplicatorRuntime Setting consistencyFailureStop to true
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,396 [ipbalamban - pool-2-thread-1] INFO  conf.ReplicatorRuntime Setting consistencyCheckColumnNames to true
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,396 [ipbalamban - pool-2-thread-1] INFO  conf.ReplicatorRuntime Setting consistencyCheckColumnTypes to true
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,396 [ipbalamban - pool-2-thread-1] INFO  conf.ReplicatorRuntime Setting applierFailurePolicy to stop
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,401 [ipbalamban - pool-2-thread-1] INFO  conf.ReplicatorRuntime Setting applierFailurePolicy to stop
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,401 [ipbalamban - pool-2-thread-1] INFO  conf.ReplicatorRuntime Setting replicator.applier.failOnZeroRowUpdate to warn
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,599 [ipbalamban - pool-2-thread-1] INFO  pipeline.Pipeline Configuring pipeline: master
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,600 [ipbalamban - pool-2-thread-1] INFO  datasource.DataSourceService Configuring data source: name=global
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,600 [ipbalamban - pool-2-thread-1] INFO  datasource.DataSourceManager Loading data source: name=global className=com.continuent.tungsten.replicator.datasource.SqlDataSource
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,637 [ipbalamban - pool-2-thread-1] INFO  datasource.AbstractDataSource Using predefined csvType: name=mysql
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,637 [ipbalamban - pool-2-thread-1] INFO  datasource.AbstractDataSource Checking CSV formatter class: com.continuent.tungsten.replicator.csv.DefaultCsvDataFormat
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,637 [ipbalamban - pool-2-thread-1] INFO  datasource.DataSourceService Configuring data source: name=extractor
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,638 [ipbalamban - pool-2-thread-1] INFO  datasource.DataSourceManager Loading data source: name=extractor className=com.continuent.tungsten.replicator.datasource.AliasDataSource
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,652 [ipbalamban - pool-2-thread-1] INFO  pipeline.StageTaskGroup Instantiating and configuring tasks for stage: binlog-to-q
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,693 [ipbalamban - pool-2-thread-1] INFO  extractor.ExtractorWrapper Configuring raw extractor and heartbeat filter
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,694 [ipbalamban - pool-2-thread-1] INFO  extractor.mysql.MySQLExtractor Reading logs from MySQL master: binlogMode= master
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,694 [ipbalamban - pool-2-thread-1] INFO  extractor.mysql.MySQLExtractor Using relay log directory as source of binlogs: /opt/continuent/relay/ipbalamban
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,694 [ipbalamban - pool-2-thread-1] INFO  event.EventMetadataFilter Use default schema for unknown SQL statements: false
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,694 [ipbalamban - pool-2-thread-1] INFO  extractor.ExtractorWrapper Master auto-repositioning on source_id change is enabled; extractor will reposition current log position if last extracted source_id differs from current source_id
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,699 [ipbalamban - pool-2-thread-1] INFO  pipeline.StageTaskGroup Instantiating and configuring tasks for stage: q-to-thl
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,705 [ipbalamban - pool-2-thread-1] INFO  management.OpenReplicatorManager Sent State Change Notification START -> OFFLINE:NORMAL
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,705 [ - WrapperSimpleAppMain] INFO  management.OpenReplicatorManager Replicator auto-enabling is engaged; going online automatically
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,706 [ipbalamban - pool-2-thread-1] INFO  pipeline.Pipeline Releasing pipeline: master
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,706 [ipbalamban - pool-2-thread-1] INFO  pipeline.StageTaskGroup Releasing tasks for stage: binlog-to-q
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,706 [ipbalamban - pool-2-thread-1] INFO  extractor.ExtractorWrapper Releasing raw extractor and heartbeat filter
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,706 [ipbalamban - pool-2-thread-1] INFO  pipeline.StageTaskGroup Releasing tasks for stage: q-to-thl
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,714 [ipbalamban - pool-2-thread-1] INFO  conf.ReplicatorRuntime Replicator role: master
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,715 [ipbalamban - pool-2-thread-1] INFO  conf.ReplicatorRuntime Setting consistencyFailureStop to true
INFO   | jvm 1    | 2019/09/07 14:25:27 | 2019-09-07 06:25:27,715 [ipbalamban - pool-2-thread-1] INFO  conf.ReplicatorRuntime Setting consistencyCheckColumnNames to true



Chris Parker

Sep 13, 2019, 8:54:28 AM
to tungsten-repl...@googlegroups.com
Hi,

The details in the log imply this node is coming up as a master, not as a slave.

This suggests that perhaps the configuration is incorrect.

When you rsynced, did you copy only the database files, or did you also copy the Tungsten installation?
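One quick way to see what role the node believes it has, and where that came from, is to compare the runtime status against the deployment configuration. This sketch assumes a tpm-based install with the common default INI path (/etc/tungsten/tungsten.ini):

```shell
# What role does the replicator believe it has?
trepctl status | grep -E 'role|pipelineSource'

# Compare against the deployment configuration. With an INI-based
# install, the master/members entries determine which host comes
# up as master.
grep -E 'master|members' /etc/tungsten/tungsten.ini

# tpm can also report the configuration the installed software
# was built from.
tpm reverse
```

If the Tungsten installation directory itself was copied from the master, the new node will have inherited the master's role and service configuration, which would match the symptoms here.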



Chris Parker
Director, Professional Services EMEA & APAC
Continuent Ltd., a Delaware Corporation

https://www.continuent.com

金炯杰

Sep 13, 2019, 10:02:02 AM
to tungsten-replicator-discuss
Please stop emailing me, okay? It has been more than a year.

--------------Original message--------------
From: "Chris Parker" <chris....@continuent.com>
Sent: Friday, September 13, 2019, 8:54 PM
To: "tungsten-replicator-discuss" <tungsten-repl...@googlegroups.com>
Subject: Re: [tungsten-replicator-discuss] Relay log task has unexpectedly terminated; logs may not be accessible
-----------------------------------