Backup and restore operator (repo manager) to new k8s cluster

Problem:
We are trying to migrate a Pulp instance to a new k8s cluster. It seems the current backup and restore managers assume the same cluster, i.e. the restore manager expects a backup manager instance to already be running. But we want to restore on a new cluster.

I was thinking of just doing the following (a rough command sketch follows the list):

  1. deploy a new operator and manager instance as usual.
  2. pg_dump the database, rsync the PV contents to the new node, and export the secrets & configmaps.
  3. scale all repo-manager pods down to 0 on the new cluster.
  4. restore the db, secrets, configmaps and data.
  5. start it all back up.
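To make steps 2–5 concrete, this is roughly what I had in mind. It is only a sketch: the `pulp` namespace, the `pulp-database`/`pulp-api`/`pulp-content`/`pulp-worker` deployment names, the DB user/name, and the PV mount paths are placeholders for our setup, not names the operator guarantees, and the kubeconfig context switches between the old and new cluster are left out.

```sh
# step 2: on the OLD cluster, dump the database and export k8s objects
# (deployment names, namespace, DB user/name are placeholders)
kubectl -n pulp exec deploy/pulp-database -- pg_dump -U pulp -Fc pulp > pulp.dump
kubectl -n pulp get secrets,configmaps -o yaml > pulp-objects.yaml

# copy the contents of the pulp PV to whatever backs the new PV
rsync -a /mnt/old-pulp-pv/ new-node:/mnt/new-pulp-pv/

# step 3: on the NEW cluster, scale the repo-manager pods down
# (database stays up so we can restore into it)
kubectl -n pulp scale deploy pulp-api pulp-content pulp-worker --replicas=0

# step 4: restore secrets, configmaps and the database
kubectl -n pulp apply -f pulp-objects.yaml
kubectl -n pulp exec -i deploy/pulp-database -- pg_restore -U pulp -d pulp --clean < pulp.dump

# step 5: scale back up and let the operator reconcile
kubectl -n pulp scale deploy pulp-api pulp-content pulp-worker --replicas=1
```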

So my question is… has anyone tried to move/restore a Pulp instance to a new cluster? Is there a better way than the one I described? Can the current backup/restore managers be used for this?

The reason for migrating instead of recreating is that we have repo distributions that were synced at a specific point in time; if we resync later, it will mess up the release cycle of the machines…