I think I have successfully created a MongoDB replica set with an arbiter. When I run rs.status() on the primary, I get the following:
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2013-10-23T15:11:27Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 71210,
            "optime" : Timestamp(1382539219, 1),
            "optimeDate" : ISODate("2013-10-23T14:40:19Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2362,
            "optime" : Timestamp(1382539219, 1),
            "optimeDate" : ISODate("2013-10-23T14:40:19Z"),
            "lastHeartbeat" : ISODate("2013-10-23T15:11:26Z"),
            "lastHeartbeatRecv" : ISODate("2013-10-23T15:11:26Z"),
            "pingMs" : 1
        },
        {
            "_id" : 2,
            "name" : "mongodb2.iin:30000",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "lastHeartbeat" : ISODate("2013-10-23T15:11:26Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}
Towards the bottom, the arbiter (member 2) shows "stateStr" : "(not reachable/healthy)" with "health" : 0. Is that normal?
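In case it helps, this is roughly how I set the arbiter up from the primary (a sketch of my steps; rs.addArb() is the standard shell helper, and the host:port matches the unhealthy member above):

rs0:PRIMARY> rs.addArb("mongodb2.iin:30000")

From the primary's host I can also try connecting to the arbiter directly with the mongo shell to see whether that port answers at all:

mongo --host mongodb2.iin --port 30000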
I plan to test the replica set by simulating a failover: shut down the primary and watch the election happen. If I shut down the primary and then log into the secondary, it should eventually become the new primary, correct? (The test I have in mind is sketched below.)

Also, what's the best practice for having mongod start automatically after a reboot? I'm using a config file, so would I add the command mongod --config /etc/mongodb.conf to a script that runs at boot? (A sketch of what I mean follows the failover example.)
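Concretely, here is the failover test I'm planning (a sketch; db.shutdownServer() is the standard way to stop a mongod cleanly from the shell, and it has to be run against the admin database):

// connected to the current primary
use admin
db.shutdownServer()

// then, connected to the remaining secondary, watch the election
rs.status()

After the election, rs.status() on the surviving data-bearing member should report "stateStr" : "PRIMARY" for it.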
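And this is the kind of boot script I was thinking of (a minimal sketch, assuming a Debian-style /etc/rc.local and that /etc/mongodb.conf already sets options like fork and logpath so the command returns; the config path is from my setup):

#!/bin/sh
# /etc/rc.local -- start mongod at boot using my config file
mongod --config /etc/mongodb.conf
exit 0

Is that reasonable, or should I be using the init script that the distro package provides instead?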