Dear DSpace Community Team,
We are currently deploying DSpace 9.1 in a horizontally scaled environment using Kubernetes on GCP.
Our current architecture includes:
* Multiple DSpace 9.1 backend application pods
* Angular/UI frontend deployment
* Kubernetes ingress/load balancer
* Shared database
* Shared file/object storage
While scaling the application horizontally, we are facing login and session-related issues. Because user sessions appear to be maintained in memory within individual pods, requests routed to different pods cause session inconsistencies, unexpected logouts, and authentication failures.
We also observed that frontend requests routed through the ingress/load balancer may hit different backend pods during authentication and subsequent API calls, which appears to break session continuity.
We would like to understand the recommended production-ready approach for session management in DSpace 9.1 when running multiple pods.
We are currently evaluating the following options:
* Sticky Sessions at ingress/load balancer level
* Redis-based distributed sessions
* Database-backed sessions
* JWT/token-based authentication
* SSO/OAuth integration
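For the sticky-session option, we have drafted a cookie-based affinity configuration for ingress-nginx along the following lines (the hostname, service name, cookie name, and port are placeholders for our environment, not values taken from DSpace documentation):

```yaml
# Sketch of an ingress-nginx resource with cookie-based session affinity,
# so that all requests from one client are routed to the same backend pod.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dspace-backend
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "DSPACE_AFFINITY"  # placeholder cookie name
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  ingressClassName: nginx
  rules:
    - host: dspace.example.org        # placeholder hostname
      http:
        paths:
          - path: /server
            pathType: Prefix
            backend:
              service:
                name: dspace-backend  # placeholder service name
                port:
                  number: 8080
```

We would appreciate confirmation on whether affinity of this kind is sufficient for DSpace 9.1, or whether one of the other approaches listed above is preferred.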
Could you please provide guidance on:
1. Recommended architecture for horizontal scaling in DSpace 9.1
2. Best practices for frontend-to-backend session handling in Kubernetes environments
3. Best practices for session management across pods
4. Whether Redis/session replication is officially recommended
5. Any known limitations or reference implementations for Kubernetes deployments
6. Suggestions for production-grade HA deployments
Any documentation, community references, or implementation examples would be greatly appreciated.
Thank you for your support and guidance.