Problem:
We are trying to migrate a Pulp instance to a new k8s cluster. The current backup and restore managers seem to assume the same cluster, i.e. the restore manager expects a backup manager instance to already be running. But we want to restore onto a new cluster.
I was thinking of doing the following:
- deploy a new operator and manager instance as usual.
- pg_dump the database, rsync the PV data to the new node, and export secrets & configmaps (sketched after this list)
- scale all repo-manager pods to 0
- restore the db, secrets, configmaps and data on the new cluster (see the restore sketch below)
- start it all up
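
For reference, a minimal sketch of the export phase on the old cluster. The `pulp` namespace, the `pulp-database-0` pod name, the `pulp` db user/name, and the `/var/lib/pulp` PV path are assumptions based on a default operator deployment, so adjust them to whatever your instance actually uses:

```bash
# On the OLD cluster: dump the db, export secrets/configmaps, copy PV data.
NS=pulp                  # assumed namespace
DB_POD=pulp-database-0   # assumed postgres pod name; check with `kubectl -n $NS get pods`

# Dump the Pulp database to a local file in custom format (for pg_restore).
kubectl -n "$NS" exec "$DB_POD" -- pg_dump -U pulp -Fc pulp > pulp.dump

# Export secrets and configmaps as YAML. Cluster-specific metadata
# (uid, resourceVersion) should be stripped before re-applying.
kubectl -n "$NS" get secrets,configmaps -o yaml > pulp-secrets-configmaps.yaml

# Copy the artifact storage off the node backing the PV (path is an assumption).
rsync -avz node-old:/var/lib/pulp/ ./pulp-data/
```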
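And the matching restore phase on the new cluster, after the operator has deployed a fresh instance. Same caveats as above, and the label selector used for scaling is also an assumption; you may have to scale the operator-managed deployments by name instead:

```bash
# On the NEW cluster: stop Pulp, restore db/secrets/configmaps/data, start it up.
NS=pulp
DB_POD=pulp-database-0

# Scale the Pulp workloads (not the database) to 0 so nothing writes mid-restore.
# The label is a guess; list deployments first to find the right selector/names.
kubectl -n "$NS" scale deployment -l app.kubernetes.io/part-of=pulp --replicas=0

# Re-apply the exported secrets and configmaps.
kubectl -n "$NS" apply -f pulp-secrets-configmaps.yaml

# Restore the dump; --clean drops the objects the fresh install already created.
kubectl -n "$NS" exec -i "$DB_POD" -- pg_restore -U pulp -d pulp --clean < pulp.dump

# Copy the artifact data onto the node backing the new PV.
rsync -avz ./pulp-data/ node-new:/var/lib/pulp/

# Scale back up (adjust to your original replica counts).
kubectl -n "$NS" scale deployment -l app.kubernetes.io/part-of=pulp --replicas=1
```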
So my question is… has anyone tried to move/restore a Pulp instance to a new cluster? Is there a better way than the one I described? Can the current backup/restore managers be used for this?
The reason for migrating instead of recreating is that we have repo distributions that were synced at a specific time; if we resync later, it will mess up the release cycle of the machines…