1. The tutorial (
http://neo4j.com/docs/3.0.1/ha-setup-tutorial.html ) is somewhat misleading. Thanks to Dave from the Neo4j team for the info: you NEED to set
ha.host.coordination
and
ha.host.data
I'm on AWS. In my config these values are set to the internal instance IP, with port 5001 for coordination and port 6001 for host.data.
2. Another part of my problem: the deployment flow should be clarified in the docs.
While testing the setup, my config manager (Ansible) was doing a basic installation, and then I was logging in to each instance and adjusting the configs manually.
This means that each of my instances was initially launched in SINGLE mode.
And maybe I am wrong, but I got the impression that you cannot just switch to HA mode afterwards.
So I cleanly wiped Neo4j from the servers and updated the Ansible playbooks to prepare and upload the full HA config before the instances launched, so the very first launch read the HA-specific config.
That helped!
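In the end the key idea is to generate the complete HA section per instance before the first start. A minimal Python sketch of that idea (the IPs and the generation approach are illustrative placeholders, not my actual playbook):

```python
# Sketch: render the HA section of neo4j.conf for each instance
# before its first launch, so Neo4j never starts in SINGLE mode.
# The IPs below are made-up placeholders for the internal AWS IPs.

HOSTS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def render_ha_config(server_id, hosts):
    """Build the per-instance HA settings; server_id is 1-based."""
    me = hosts[server_id - 1]
    initial = ",".join("%s:5001" % h for h in hosts)
    return "\n".join([
        "dbms.mode=HA",
        "ha.server_id=%d" % server_id,
        "ha.initial_hosts=%s" % initial,          # same on every instance
        "ha.host.coordination=%s:5001" % me,      # this instance's IP
        "ha.host.data=%s:6001" % me,              # this instance's IP
    ])

if __name__ == "__main__":
    for sid in range(1, len(HOSTS) + 1):
        print(render_ha_config(sid, HOSTS))
        print()
```

The rendered text would then be appended to each instance's neo4j.conf by the playbook before the service is started for the first time.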
I do see some issues in the browser console, and there's a bug where Browser Sync doesn't remember my authentication, but at least I got my servers running.
HA part of my config:
#*****************************************************************
# HA configuration
#*****************************************************************
# Uncomment and specify these lines for running Neo4j in High Availability mode.
# See the High availability setup tutorial for more details on these settings
# Database mode
# Allowed values:
# HA - High Availability
# SINGLE - Single mode, default.
# To run in High Availability mode uncomment this line:
dbms.mode=HA
# ha.server_id is the number of each instance in the HA cluster. It should be
# an integer (e.g. 1), and should be unique for each cluster instance.
ha.server_id={ 1 }
# ha.initial_hosts is a comma-separated list (without spaces) of the host:port
# where the ha.host.coordination of all instances will be listening. Typically
# this will be the same for all cluster instances.
ha.initial_hosts={ Internal IP of server 1 }:5001,{ Internal IP of server 2 }:5001,{ Internal IP of server 3 }:5001
# IP and port for this instance to listen on, for communicating cluster status
# information with other instances (also see ha.initial_hosts). The IP
# must be the configured IP address for one of the local interfaces.
ha.host.coordination={ Internal IP of server 1 }:5001
# IP and port for this instance to listen on, for communicating transaction
# data with other instances (also see ha.initial_hosts). The IP
# must be the configured IP address for one of the local interfaces.
ha.host.data={ Internal IP of server 1 }:6001
# The interval at which slaves will pull updates from the master. Comment out
# the option to disable periodic pulling of updates. Unit is seconds.
ha.pull_interval=10
# Number of slaves the master will try to push a transaction to upon commit
# (default is 1). The master will optimistically continue and not fail the
# transaction even if it fails to reach the push factor. Setting this to 0 will
# increase write performance when writing through master but could potentially
# lead to branched data (or loss of transaction) if the master goes down.
#ha.tx_push_factor=1
# Strategy the master will use when pushing data to slaves (if the push factor
# is greater than 0). There are three options available "fixed_ascending" (default),
# "fixed_descending" or "round_robin". Fixed strategies will start by pushing to
# slaves ordered by server id (accordingly with qualifier) and are useful when
# planning for a stable fail-over based on ids.
#ha.tx_push_strategy=fixed_ascending
# Policy for how to handle branched data.
#ha.branched_data_policy=keep_all
# How often heartbeat messages should be sent. Defaults to ha.default_timeout.
#ha.heartbeat_interval=5s
# Timeout for heartbeats between cluster members. Should be at least twice that of ha.heartbeat_interval.
#ha.heartbeat_timeout=11s
# If you are using a load-balancer that doesn't support HTTP Auth, you may need to turn off authentication for the
# HA HTTP status endpoint by uncommenting the following line.
#dbms.security.ha_status_auth_enabled=false
# Whether this instance should only participate as slave in cluster. If set to
# true, it will never be elected as master.
#ha.slave_only=false
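Before debugging the config itself, it's also worth confirming the two ports are actually reachable between the instances — on AWS the security groups have to allow them. A quick sketch with placeholder IPs:

```python
# Sketch: quick TCP reachability check for the HA ports
# (5001 coordination, 6001 data). The peer IPs are placeholders;
# run this from each instance against the other cluster members.
import socket

PEERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # internal IPs (example)

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_peers(peers=PEERS, ports=(5001, 6001)):
    """Return the (host, port) pairs that are NOT reachable."""
    return [(h, p) for h in peers for p in ports if not port_open(h, p)]
```

An empty list from check_peers() means both HA ports are open everywhere.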
The config is almost identical between instances; the only differences are:
- ha.server_id,
- the ha.host.coordination host,
- the ha.host.data host
(they carry the IP of the "current" instance)
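Since only those three settings differ, the generated files can be sanity-checked before shipping them. A small sketch (the key=value parsing and the sample file contents are my own illustration, not part of Neo4j):

```python
# Sketch: sanity-check a set of per-instance HA configs.
# Verifies that server ids are unique and that each instance's
# coordination and data hosts appear in ha.initial_hosts.

def parse(conf_text):
    """Parse key=value lines, ignoring comments and blank lines."""
    settings = {}
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

def check_cluster(conf_texts):
    parsed = [parse(t) for t in conf_texts]
    ids = [p["ha.server_id"] for p in parsed]
    assert len(set(ids)) == len(ids), "ha.server_id must be unique"
    for p in parsed:
        initial = p["ha.initial_hosts"].split(",")
        for key in ("ha.host.coordination", "ha.host.data"):
            host = p[key].split(":")[0]
            assert any(e.startswith(host + ":") for e in initial), \
                "%s host missing from ha.initial_hosts" % key
    return True
```

Running check_cluster() over the three rendered configs catches copy-paste mistakes (duplicate ids, a forgotten IP) before they ever reach a server.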
Hope this helps!
Dennis