Why is the AtoM containerized setup not ready for production yet?


Francisco Schwertner

Jan 26, 2026, 5:28:18 PM
to AtoM Users
Hi

I just want to know why the AtoM containerized setup is not recommended for production environments?

Is there any issue related to running AtoM on Kubernetes?

tks

pieters...@gmail.com

Jan 27, 2026, 6:07:22 AM
to ica-ato...@googlegroups.com

Hi

AtoM (Access to Memory) containerized setups, typically deployed with Docker Compose, are generally not recommended for production environments because they lack the high availability, automated orchestration, and robust data management that a live archival system requires.

While excellent for development, testing, or quick prototyping, a basic containerized deployment introduces several risks and complexities that make it unsuitable for production: 

1. Lack of High Availability and Self-Healing 

  • No Automatic Failover: Docker Compose can restart a crashed container on the same host (via restart policies), but if the host itself goes down there is nothing to reschedule the containers onto another node.
  • Manual Scaling/Recovery: Scaling out, or recovering from anything beyond a simple container crash, is manual or hand-rolled (see the watchdog sketch after this list), resulting in higher downtime than production-grade orchestration platforms like Kubernetes.
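
To illustrate what "manual recovery" means in practice, here is a minimal watchdog sketch in Python using the Docker SDK (pip install docker). The container name atom_nginx_1 is an assumption for illustration, not an AtoM default; with plain Docker Compose you end up hand-rolling something like this, whereas an orchestrator provides it out of the box:

    import time
    import docker  # Docker SDK for Python: pip install docker

    CONTAINER_NAME = "atom_nginx_1"  # assumed name; check `docker ps` for yours

    client = docker.from_env()
    container = client.containers.get(CONTAINER_NAME)

    while True:
        container.reload()                 # refresh the cached status from the daemon
        if container.status != "running":  # e.g. "exited" after a crash
            print(f"{CONTAINER_NAME} is {container.status}; restarting")
            container.restart()
        time.sleep(30)                     # poll every 30 seconds

Note that a watchdog like this still runs on the same host, so it does nothing for the host-failure case above; that is exactly the gap an orchestrator closes with rescheduling.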

2. Complex Data Persistence and State Management

  • Stateful Nature of Databases: AtoM keeps archival descriptions, authority records, and user data in a MySQL database (and its search index in Elasticsearch), while containers are designed to be disposable and stateless.
  • Data Loss Risk: Docker volumes can persist data, but managing persistent storage and backups across containers in production is complex and needs deliberate planning to avoid data inconsistency or loss (see the backup sketch after this list).
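
To make the data-loss risk concrete, here is a minimal backup sketch in Python that dumps the MySQL database out of the container onto the host. The container name atom_mysql_1, database name atom, and the ATOM_DB_* environment variables are assumptions for illustration, not AtoM defaults:

    import os
    import subprocess
    from datetime import datetime

    # Assumed names: adjust the container, database and credentials to your setup.
    MYSQL_CONTAINER = "atom_mysql_1"
    DB_NAME = "atom"
    DB_USER = os.environ.get("ATOM_DB_USER", "atom")
    DB_PASS = os.environ["ATOM_DB_PASSWORD"]  # fail loudly if the secret is not set

    dump_file = f"atom-{datetime.now():%Y%m%d-%H%M%S}.sql"

    # Run mysqldump inside the database container and capture the output on the host.
    # (For real use, prefer an option file over passing the password on the command line.)
    with open(dump_file, "wb") as out:
        subprocess.run(
            ["docker", "exec", MYSQL_CONTAINER,
             "mysqldump", f"--user={DB_USER}", f"--password={DB_PASS}", DB_NAME],
            stdout=out,
            check=True,  # raise instead of silently writing an empty dump
        )
    print(f"Wrote {dump_file}")

Even then, someone has to schedule this, verify the dumps, and copy them off the host; on a native install or a managed platform that workflow is usually better established.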

3. Security Vulnerabilities

  • Shared Kernel Risks: Containers share the host operating system kernel. A vulnerability in the host kernel can compromise all containers running on it.
  • Hardcoded Credentials: Poorly configured containers may have database passwords or API keys baked into the image instead of injected at runtime (see the sketch after this list), which creates security risks.
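
AtoM itself is configured through its own PHP settings files, so the snippet below is only a generic illustration of the principle: read secrets from the container environment at runtime (injected via a Compose env_file, Docker secrets, or Kubernetes Secrets) instead of baking them into the image. The variable names are assumptions:

    import os
    import sys

    # Illustrative variable names, not AtoM settings. The values come from the
    # container's runtime environment, never from the image or the repository.
    try:
        db_host = os.environ.get("ATOM_DB_HOST", "mysql")
        db_user = os.environ["ATOM_DB_USER"]
        db_pass = os.environ["ATOM_DB_PASSWORD"]  # handed to the DB driver, never logged
    except KeyError as missing:
        sys.exit(f"Missing required environment variable: {missing}")

    print(f"Would connect to MySQL at {db_host} as {db_user}")  # never print the password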

4. Networking Complexity

  • Difficult Configuration: Networking between containers and with external services can be tricky in production, particularly when attempting to keep containers isolated while maintaining high-performance, secure connections. 

5. Increased Management Overhead 

  • Monitoring Challenges: Monitoring individual containers for health and performance takes more tooling than monitoring traditional virtual machines (a simple probe is sketched after this list).
  • Configuration Management: Managing configurations across several interconnected containers (web server, database, worker processes) adds complexity, particularly for updates. 
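
As a small example of the extra tooling this implies, the sketch below probes the web front end and the MySQL port from the host. The URL and port numbers are assumptions for a typical Compose setup, not guaranteed AtoM defaults; on Kubernetes, liveness and readiness probes cover this declaratively:

    import socket
    import urllib.request

    # Assumed endpoints; adjust to whatever ports your setup actually publishes.
    WEB_URL = "http://localhost:63001/"  # AtoM web front end (assumed port)
    MYSQL_ADDR = ("localhost", 3306)     # MySQL, if its port is published at all

    def check_http(url):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    def check_tcp(addr):
        try:
            with socket.create_connection(addr, timeout=5):
                return True
        except OSError:
            return False

    print("web:  ", "ok" if check_http(WEB_URL) else "DOWN")
    print("mysql:", "ok" if check_tcp(MYSQL_ADDR) else "DOWN")

This only tells you that something answered; production monitoring also needs metrics, log aggregation, and alerting across all of the containers.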

Conclusion: For a reliable, production-ready AtoM deployment, a native installation on virtual machines or bare metal is usually preferred; if you do want containers in production, use a properly managed orchestration platform such as a managed Kubernetes service rather than plain Docker Compose. Either route gives you better high availability, easier backup management, and stronger security.

Groete / Regards

Johan Pieterse

082 337-1406

