I am currently migrating a Solace HA setup from a VM-based environment to a Kubernetes cluster.
In the original setup, I ran three VMs, each hosting a Solace instance in Podman, in an HA configuration; each instance had its own local storage-group directory. To transition this to Kubernetes, I have built a K8s cluster across the same three VMs and deployed the Solace HA group using the PubSub+ Operator directly (without using OLM).
During this process, I attempted to reuse the existing storage-group data from the Podman environment by mounting those directories as Persistent Volumes (PVs) for the K8s pods. However, the configuration is not working as expected.
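For context, this is roughly the kind of mapping I attempted; the names, path, capacity, and node hostname below are illustrative placeholders, not my exact configuration:

```yaml
# Illustrative only: a local PersistentVolume pointing at the old Podman
# storage-group directory on one of the VMs (now a K8s node).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: solace-node1-data
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: solace-local
  local:
    path: /var/lib/solace/storage-group   # old Podman volume location (placeholder)
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - vm-node-1               # pin the PV to the VM that holds this data
```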
My specific questions are:
Is it fundamentally possible to reuse the storage-group data from a VM/Podman-based environment directly in a Kubernetes Operator-managed environment?
If so, what is the correct methodology or procedure to ensure the data is recognized properly by the Operator?
I initially assumed that simply mapping the existing storage-group directories would suffice, but I am encountering issues and would appreciate your technical guidance on the proper migration path.
Hi there, I’m Solly AI. I generate responses to your questions using the power of the Solace Agent Mesh.
Based on the provided documentation, I’ll explain how to migrate a software event broker HA setup from VM-based Podman containers to Kubernetes while preserving the storage-group data.
The migration process requires careful consideration of several key aspects:
High-Level Migration Overview:
- The migration moves configuration and message delivery state to the new Kubernetes hosts while, in most cases, maintaining service availability
- Both the old and new instances must be online simultaneously for the configuration and state transfer
- The migration must proceed node by node in a specific order: the monitoring node first, then the standby node, and finally the active node
Key Prerequisites:
- The redundancy configuration must use stable network identities (DNS names rather than fixed IP addresses are recommended)
- The storage containing broker state must be transferable to the new Kubernetes environment
- Both source and target environments must run the same version of the Solace PubSub+ Event Broker
Important Considerations:
- Redundancy must be properly configured using DNS to allow seamless node transitions
- Storage-group data transfer is critical for maintaining message state and configuration
- Network identity and connectivity between nodes must be maintained throughout the migration
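Before pointing a new PV at copied storage-group data, it helps to verify that the copy is byte-for-byte identical to the source. The sketch below is generic and not Solace-specific; run it only while the broker instances are stopped so the files are not changing underneath it:

```python
import hashlib
import os

def dir_manifest(root):
    """Map each file's path (relative to root) to the SHA-256 digest of its contents."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in 1 MiB chunks so large message-spool files fit in memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            manifest[rel] = digest.hexdigest()
    return manifest

def copies_match(source_root, target_root):
    """True if both trees contain exactly the same files with identical contents."""
    return dir_manifest(source_root) == dir_manifest(target_root)
```

A mismatch means the transfer (rsync, scp, etc.) should be repeated before the Operator-managed pods are allowed to start against that volume.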
The official documentation provides step-by-step instructions for:
- Preparing the target Kubernetes environment
- Migrating each node in the correct sequence
- Transferring the storage-group data
- Verifying redundancy after migration
- Managing DNS updates
- Handling configuration synchronization
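As a quick sanity check during the DNS-update step, you can confirm that a migrated node's name now resolves to its new address before moving on to the next node. This is a generic sketch; the hostname in the comment is a placeholder for your broker's redundancy hostname:

```python
import socket

def resolve_ipv4(hostname):
    """Return the set of IPv4 addresses the hostname currently resolves to."""
    return {info[4][0] for info in socket.getaddrinfo(hostname, None, socket.AF_INET)}

# After updating DNS for a migrated node, verify that its name points at the
# new Kubernetes service/pod address before migrating the next node, e.g.:
#   resolve_ipv4("primary.solace.example.com")  # placeholder hostname
```

Keep in mind that DNS caching (TTLs) can delay the change becoming visible to the other broker nodes.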
Due to the complexity and criticality of this migration, I strongly recommend following the official documentation rather than attempting to summarize the detailed steps here, as any omission or misinterpretation could lead to service disruption or data loss.