I want to post my progress.
NOTE: This is just a status update. There are no questions asked in this posting.
First of all, Arthur Ingram sent me a link to this: https://github.com/RamSailopal/YottaDB-DR
It outlines how to set up Docker containers/instances, ready to go, for testing replication. VERY HELPFUL!
There were a few snags that ChatGPT got me through.
First, I had to add my user to the docker group:
sudo usermod -aG docker $USER
newgrp docker
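Note that newgrp only affects the current shell (a full logout/login applies the group change everywhere). A quick way to confirm Docker access is:
groups | grep docker
docker ps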
I then created a script for launching the docker instance:
kdt0p@remoteserver:~/kill_later/YottaDB-DR$ cat startInstGeneric.sh
#!/bin/bash
#
# startInstGeneric.sh
#
# Usage:
# ./startInstGeneric.sh instA
# ./startInstGeneric.sh instB
#
# Purpose:
# Launch a YottaDB Docker demo container (instA or instB).
# Ensures the required Docker network exists.
#
set -e
# ----- Configuration -----
REPO_ROOT="/home/kdt0p/kill_later/YottaDB-DR"
YOTTA_INIT_DIR="${REPO_ROOT}/yotta-init"
IMAGE="
docker.io/yottadb/yottadb-base"
DOCKER_NET="yotta-dr"
# ----- Argument validation -----
if [ $# -ne 1 ]; then
    echo "Usage: $0 {instA|instB}"
    exit 1
fi
INSTANCE="$1"
case "$INSTANCE" in
instA|instB) ;;
*)
echo "Error: instance must be instA or instB"
exit 1
;;
esac
# ----- Sanity checks -----
if [ ! -d "$YOTTA_INIT_DIR" ]; then
    echo "Error: missing directory: $YOTTA_INIT_DIR"
    exit 1
fi
# ----- Ensure Docker network exists -----
if ! docker network inspect "$DOCKER_NET" >/dev/null 2>&1; then
    echo "Creating Docker network: $DOCKER_NET"
    docker network create "$DOCKER_NET"
fi
# ----- Launch container -----
cd "$REPO_ROOT"
# Remove any existing container with this name
docker rm -f "$INSTANCE" 2>/dev/null || true
docker run \
    --name "$INSTANCE" \
    --hostname "$INSTANCE" \
    --network "$DOCKER_NET" \
    --rm \
    -v "${YOTTA_INIT_DIR}:/home/yotta" \
    -w /home/yotta \
    -it \
    "$IMAGE" \
    /bin/bash
kdt0p@remoteserver:~/kill_later/YottaDB-DR$
Then I created wrapper scripts to call into this:
kdt0p@remoteserver:~/kill_later/YottaDB-DR$ cat startInstA.sh
#!/bin/bash
/home/kdt0p/kill_later/YottaDB-DR/startInstGeneric.sh "instA"
kdt0p@remoteserver:~/kill_later/YottaDB-DR$ cat startInstB.sh
#!/bin/bash
/home/kdt0p/kill_later/YottaDB-DR/startInstGeneric.sh "instB"
One needs two shell windows: one for instA and another for instB.
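If the scripts aren't executable yet, chmod fixes that; and if an extra shell is ever needed inside a container that is already running, docker exec works (using the container names from above):
chmod +x startInstGeneric.sh startInstA.sh startInstB.sh
docker exec -it instA /bin/bash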
Next, inside the container, looking at the files provided, one sees:
drwxr-xr-x 2 kdt0p kdt0p 4096 Jan 28 16:29 .
drwxr-xr-x 4 kdt0p kdt0p 4096 Jan 28 12:53 ..
-rwxr-xr-x 1 kdt0p kdt0p 4033 Jan 28 16:08 master.sh
-rwxr-xr-x 1 kdt0p kdt0p 3581 Jan 28 16:12 slave.sh
-rwxr-xr-x 1 kdt0p kdt0p 162 Jan 26 12:28 yottainit
-rwxr-xr-x 1 root root 182 Jan 26 11:31 yottainit.original
We tweaked some of the paths; the final working (for me) result is:
kdt0p@remoteserver:~/kill_later/YottaDB-DR/yotta-init$ cat yottainit
source /opt/yottadb/current/ydb_env_set
export ydb_dir="/data"
export ydb_repl_instance="/data/yottadb.repl"
export ydb_gbldir="/data/r2.02_x86_64/g/yottadb.gld"
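To sanity-check that the environment points where expected after sourcing this file, just echo the variables and confirm the global directory file exists:
source /home/yotta/yottainit
echo $ydb_gbldir
echo $ydb_repl_instance
ls -l $ydb_gbldir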
Next, we expanded the master.sh script to include lots of comments:
root@instA:/home/yotta# cat master.sh
#!/bin/bash
#
# master.sh
#
# Purpose:
# Initialize and start the YottaDB replication SOURCE (primary) instance.
# This script is intended to be run inside the instA container.
#
# Load local environment customizations:
# - Sets ydb_gbldir
# - Sets ydb_repl_instance (replication instance file path)
# - Sets ydb_dir and any other per-demo variables
#
# This file is bind-mounted from the host and shared between containers.
source /home/yotta/yottainit
# Load the standard YottaDB environment:
# - Defines ydb_dist
# - Sets PATH, library paths, etc.
#
# This ensures mupip and ydb binaries behave correctly.
source /opt/yottadb/current/ydb_env_set
# Enable replication at the database/region level.
#
# -replication=on
# Marks the database headers as replication-capable.
#
# -region "*"
# Applies this setting to ALL regions in the global directory.
#
# This must be done before starting any replication source servers.
# It requires standalone access if replication was previously off.
/opt/yottadb/current/mupip set -replication=on -region "*"
# Create (or recreate) the replication instance file for this instance.
#
# -instance_create
# Initializes the replication instance metadata file.
#
# -name=instA
# Sets the immutable replication instance name for this node.
#
# The instance name is how other nodes (e.g., instB) identify this source.
#
# //kt NOTES: This really should only be done ONCE, not every time the server starts up
/opt/yottadb/current/mupip replicate -instance_create -name=instA
# Start the replication SOURCE server.
#
# -source -start
# Launches the source replication process.
#
# -instsecondary=instB
# Logical name of the secondary instance this source will feed.
#
# -secondary=instB:4001
# Network endpoint (hostname:port) of the receiver on the secondary.
#
# -buffsize=1048576
# Size (in bytes) of the replication buffer used for data transfer.
#
# -log=/root/A_B.log
# Log file for source server activity and diagnostics.
#
# After this command succeeds, the source server will actively stream
# journaled updates to the secondary receiver.
#
# //kt NOTES: need to manage log files.
#
#What goes into the replication log
#--Startup/shutdown events
#--Connection status
#--Errors and warnings
#--Some informational messages
#
#It is not per-transaction logging (that’s journaling), but it can still grow over time.
#
#Consider use as follows:
#Instead of /root, use something like:
#
#-log=/opt/worldvista/EHR/log/repl_source.log
#
#
#Create a file:
#
#sudo nano /etc/logrotate.d/yottadb-repl
#
#Put this in it:
#
#/opt/worldvista/EHR/log/repl_*.log {
# size 10M
# rotate 5
# compress
# delaycompress
# copytruncate
# missingok
# notifempty
#}
#
#What each line means (plain English)
#
#size 10M
#→ Rotate when the file reaches 10 MB (not daily/weekly)
#
#rotate 5
#→ Keep 5 old versions, delete anything older
#→ Worst-case disk usage ≈ 50 MB per log
#
#compress
#→ Older logs are gzip’d
#
#delaycompress
#→ Don’t compress the most recent rotated file (safer for debugging)
#
#copytruncate
#→ Critical: keeps the replication process running
#(logrotate copies the file, then truncates the original)
#
#missingok
#→ Don’t error if the log doesn’t exist yet
#
#notifempty
#→ Don’t rotate empty logs
/opt/yottadb/current/mupip replicate \
    -source -start \
    -instsecondary=instB \
    -secondary=instB:4001 \
    -buffsize=1048576 \
    -log=/root/A_B.log
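After master.sh runs, a quick sanity check (these are standard mupip options) confirms the source server is alive and shows recent log activity:
/opt/yottadb/current/mupip replicate -source -checkhealth
tail /root/A_B.log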
Next, we expanded the slave.sh script to include lots of comments:
root@instB:/home/yotta# cat slave.sh
#!/bin/bash
#
# slave.sh
#
# Purpose:
# Initialize and start the YottaDB replication SECONDARY (standby) instance.
# This script is intended to be run inside the instB container.
#
# In steady state:
# - This node RECEIVES and APPLIES updates from the primary (instA).
#
# For future switchover:
# - A PASSIVE source server is also started, so this node can be
# promoted quickly to primary without reconfiguration.
#
# Load local environment customizations:
# - ydb_gbldir (global directory)
# - ydb_repl_instance (replication instance file path)
# - ydb_dir (base directory)
#
# This file is bind-mounted from the host.
source /home/yotta/yottainit
# Load the standard YottaDB runtime environment:
# - defines ydb_dist
# - sets PATH, library paths, etc.
#
# Ensures mupip/ydb commands work correctly.
source /opt/yottadb/current/ydb_env_set
# Enable replication in the database headers.
#
# -replication=on
# Marks all regions as replication-capable.
#
# -region "*"
# Applies to all regions in the global directory.
#
# This is required before receiver/update processes can run.
# In production, this is usually done once and not on every boot.
/opt/yottadb/current/mupip set -replication=on -region "*"
# Create the replication instance file for this node.
#
# -instance_create
# Initializes replication metadata for this instance.
#
# -name=instB
# Sets the immutable replication instance identity for this node.
#
# -noreplace
# IMPORTANT SAFETY OPTION:
# - If the instance file already exists, do NOT rename or recreate it.
# - This makes the command safe to run repeatedly in a demo environment.
#
# In production, instance creation is normally a one-time operation.
/opt/yottadb/current/mupip replicate -instance_create -name=instB -noreplace
# Start a PASSIVE replication source server.
#
# -source -start -passive
# Starts a source server that does NOT actively send data.
# It is "pre-staged" and ready to be activated later.
#
# Why this exists:
# - Allows fast promotion of this node to PRIMARY during a switchover.
# - Avoids rebuilding replication state during a role change.
#
# -instsecondary=dummy
# Required syntactically, but unused in passive mode.
# (No real secondary is contacted.)
#
# -buffsize=1048576
# Size of the replication buffer (1 MiB).
#
# -log=/root/repl_source.log
# Log file for this (passive) source server.
#
# NOTE:
# This source server must remain PASSIVE while the receiver/update
# processes are running on this node.
/opt/yottadb/current/mupip replicate \
    -source -start -passive \
    -instsecondary=dummy \
    -buffsize=1048576 \
    -log=/root/repl_source.log
# Start the replication RECEIVER server.
#
# -receive -start
# Listens for incoming replication data from the primary.
#
# -listenport=4001
# TCP port on which this node accepts replication connections.
#
# -buffsize=1048576
# Size of the receive buffer (1 MiB).
#
# -log=/root/repl_receive.log
# Log file for receiver and update activity.
#
# When this runs successfully:
# - The receiver process accepts data from the primary.
# - The update process applies journaled changes to the database.
/opt/yottadb/current/mupip replicate \
    -receive -start \
    -listenport=4001 \
    -buffsize=1048576 \
    -log=/root/repl_receive.log
# Perform a health check on the receiver/update processes.
#
# Confirms that:
# - Receiver server is running
# - Update process is alive
#
# Useful both interactively and in scripts.
/opt/yottadb/current/mupip replicate -receive -checkhealth
root@instB:/home/yotta#
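For completeness (and relevant to the crash testing planned below), here is a sketch of shutting replication down cleanly, using standard mupip qualifiers. On instB (standby), stop the receiver and then the passive source; on instA (primary), stop the active source:
# On instB:
/opt/yottadb/current/mupip replicate -receive -shutdown -timeout=0
/opt/yottadb/current/mupip replicate -source -shutdown -timeout=0
# On instA:
/opt/yottadb/current/mupip replicate -source -shutdown -timeout=0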
After running the master.sh script in the instA container and slave.sh in the instB container, I can set a value in instA ...
root@instA:/home/yotta# source yottainit
root@instA:/home/yotta# ydb -direct
YDB>set ^tmp("x")=1
YDB>
and find that the value has been replicated into instB:
root@instB:/home/yotta# ydb -direct
YDB>zwr ^tmp(*)
^tmp("x")=1
YDB>
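Another useful check is asking the source server (on instA) how many updates are still waiting to be sent; a backlog of zero means instB is fully caught up:
/opt/yottadb/current/mupip replicate -source -showbacklog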
So, I'm making progress!
Next, as per the recommendation in the Acculturation Guide, I am going to work on crashing one machine and trying to recover from the backup.
Thanks
Kevin T