I’m doing just this.
We currently have 5 Pulp instances across the globe, with ~6 TB of content, predominantly RPM-based but with some file repos, and soon we're likely to have container content too.
As has been stated, Pulp has no native support for this, so we built our own tooling using its Python API client libraries.
For us, since we already needed to write our own tooling to handle our release process, supporting multiple installations wasn't much extra overhead.
We have a separate DB that stores the list of Pulp servers and repositories we want to manage.
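Roughly, the shape of that DB looks like the sketch below (table and column names are illustrative, not our actual schema):

```python
import sqlite3

# Illustrative schema only: one table of Pulp servers (with a
# primary/secondary flag) and one table of repositories that should
# exist on every server.
conn = sqlite3.connect("pulp_fleet.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS servers (
    id         INTEGER PRIMARY KEY,
    name       TEXT NOT NULL UNIQUE,
    api_url    TEXT NOT NULL,         -- e.g. https://pulp-eu1.example.com
    is_primary INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE IF NOT EXISTS repositories (
    id           INTEGER PRIMARY KEY,
    name         TEXT NOT NULL UNIQUE,
    content_type TEXT NOT NULL,       -- 'rpm', 'file', later 'container'
    upstream_url TEXT                 -- NULL for internally managed repos
);
""")
conn.commit()
```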
When we need to create a new repository to mirror (or a purely internally managed one), the script adds the DB entries and then goes around the Pulp instances creating the repository on each, as sketched below. The primary node points at the upstream source; the secondaries point back at the primary.
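A rough sketch of that creation pass, using Pulp 3's REST endpoints for RPM content directly rather than the generated client bindings (hostnames, credentials, and repo names here are made up):

```python
import requests

def create_repo_and_remote(api_url, auth, name, remote_url):
    """Create an RPM repository and its remote on one Pulp instance."""
    repo = requests.post(
        f"{api_url}/pulp/api/v3/repositories/rpm/rpm/",
        json={"name": name}, auth=auth)
    repo.raise_for_status()
    remote = requests.post(
        f"{api_url}/pulp/api/v3/remotes/rpm/rpm/",
        json={"name": name, "url": remote_url, "policy": "on_demand"},
        auth=auth)
    remote.raise_for_status()
    return repo.json()["pulp_href"], remote.json()["pulp_href"]

auth = ("admin", "s3cret")
# The primary mirrors the real upstream...
create_repo_and_remote(
    "https://pulp-primary.example.com", auth, "epel9",
    "https://dl.fedoraproject.org/pub/epel/9/Everything/x86_64/")
# ...while each secondary mirrors the primary's published copy.
for node in ("https://pulp-eu1.example.com", "https://pulp-ap1.example.com"):
    create_repo_and_remote(
        node, auth, "epel9",
        "https://pulp-primary.example.com/pulp/content/epel9/")
```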
When adding or removing content, we do everything on the primary first, and then the tooling ensures the secondaries are kept in sync.
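The sync pass looks something like the minimal helper below (again plain REST for brevity; the repo and remote hrefs would come out of the tracking DB in practice):

```python
import time
import requests

def sync_repo(api_url, auth, repo_href, remote_href):
    """Kick off a sync on one instance and wait for it to finish."""
    resp = requests.post(
        f"{api_url}{repo_href}sync/",
        json={"remote": remote_href}, auth=auth)
    resp.raise_for_status()
    # Sync is asynchronous: Pulp returns a task href to poll.
    task_href = resp.json()["task"]
    while True:
        task = requests.get(f"{api_url}{task_href}", auth=auth).json()
        if task["state"] in ("completed", "failed", "canceled"):
            return task["state"]
        time.sleep(5)
```

To actually serve the freshly synced version, a publication/distribution update follows the sync; I've left that out of the sketch.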
It works reasonably well.
I’m curious to investigate alternative options that make better use of cloud-based object storage: replicate the storage itself, then have locally based Pulp content nodes point at the replicated buckets. However, I don’t have the time to invest in that at the moment (and I don’t know whether it would even work).