Hello. I'm trying to configure a 3-instance replica set for the Ops Manager application database. The 3 database instances are on separate VMs and were created successfully; I installed them manually from the tarball. I can connect to each through the mongo shell. In db1 I created a root administrator so I could run rs.initiate. The initiate command looks like this:
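(The exact initiate document isn't reproduced here; it was along these lines, using the replica set name, hostnames, and port 3000 from the config below:)

rs.initiate(
  {
    _id: "opsMgrRs",
    members: [
      { _id: 0, host: "testmongoops1.qsroute66.com:3000" },
      { _id: 1, host: "testmongoops2.qsroute66.com:3000" },
      { _id: 2, host: "testmongoops3.qsroute66.com:3000" }
    ]
  }
)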
The settings in the config files look like this:
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongodb1/mongod.log

# Where and how to store data.
storage:
  dbPath: /data/mongodb/mongodb1
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 3000
  bindIp: 127.0.0.1,testmongoops1.qsroute66.com,testmongoops2.qsroute66.com,testmongoops3.qsroute66.com
security:
  # authorization: "enabled"
  keyFile: "/etc/mongodb/opsMgrDb.kf"

replication:
  replSetName: "opsMgrRs"

#sharding:
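(For context, the keyfile at /etc/mongodb/opsMgrDb.kf is the same file on all three VMs; it was generated along the usual lines, though the exact commands may have differed:)

openssl rand -base64 756 > /etc/mongodb/opsMgrDb.kf
chmod 400 /etc/mongodb/opsMgrDb.kf    # keyfile must not be readable by group/other
# ownership adjusted to whatever user runs mongod on each VM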
All hosts are reachable and hostnames are resolvable via DNS. The first error I was getting was:
"errmsg" : "No host described in new configuration 1 for replica set opsMgrRs maps to this node",
"code" : 93,
"codeName" : "InvalidReplicaSetConfig"
Then I added testmongoops1 and testmongoops1.db.com to the 127.0.0.1 line of the hosts file. The above error went away, but I got a new error:
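(After that change, the 127.0.0.1 line in /etc/hosts on that VM looks roughly like this:)

127.0.0.1   localhost testmongoops1 testmongoops1.db.com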
Some questions:
Why would I have to add the hostname to /etc/hosts when the names are resolvable via DNS?
Do I need to create the root user on instances 2 and 3?
Do I need to add all of the hostnames to the /etc/hosts files on all 3 VMs?
Any help will be greatly appreciated.
Thanks,
Mark