Thanks, yes…
we are trying to synchronise multiple OS repositories: put them in a sort of loop, export the content out to the filesystem when each sync is done, and run all of this in parallel…
for example:
remote 1 → repo1 → auto_publish → distribute_1 → export rpm units to fs → export to s3 (for ex)
remote 2 → repo2 → auto_publish → distribute_2 → export rpm units to fs → export to s3 (for ex)
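roughly something like this sketch — the names (remote1, repo1, …) and URLs are hypothetical, and it assumes pulp-cli's `-b` flag to dispatch tasks without waiting and the `--autopublish` option on rpm repositories:

```bash
# Sketch only: remote/repo/distribution names and URLs are made up.
for i in 1 2; do
  pulp rpm remote create --name "remote$i" --url "https://example.com/os$i/"
  # --autopublish: publish automatically after every successful sync
  pulp rpm repository create --name "repo$i" --remote "remote$i" --autopublish
  # -b dispatches the sync task and returns immediately (runs in background)
  pulp -b rpm repository sync --name "repo$i"
  pulp rpm distribution create --name "distribute_$i" \
    --base-path "os$i" --repository "repo$i"
done
# ...then export the rpm units to the filesystem and on to s3
```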
both these jobs perform the sync in the background, and the only way to track the progress is to run
pulp task list --state running --field pulp_href
and wait in a loop,
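something like this crude polling loop (assuming jq is available; pulp task list prints a JSON array by default):

```bash
# Keep waiting while any tasks are still in the "running" state.
while [ "$(pulp task list --state running --field pulp_href | jq 'length')" -gt 0 ]; do
  sleep 10
done
echo "all syncs finished"
```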
if task_groups were available to end users, we could group the tasks for one type of repo, say redhat 8, into their own group, then monitor the progress of that group until it finishes and only then perform the additional tasks…
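as a hypothetical sketch (the server URL, credentials, group href, and the task-group fields we poll here are all assumptions on our side, based on our reading of the task-groups endpoint):

```bash
# Poll one task group over the REST API until all of its tasks have been
# dispatched and none are left waiting or running.
BASE=https://pulp.example.com            # hypothetical server
GROUP=/pulp/api/v3/task-groups/<uuid>/   # the group for, say, the redhat 8 repos
while true; do
  tg=$(curl -s -u admin:password "$BASE$GROUP")
  dispatched=$(echo "$tg" | jq '.all_tasks_dispatched')
  pending=$(echo "$tg" | jq '.waiting + .running')
  [ "$dispatched" = "true" ] && [ "$pending" -eq 0 ] && break
  sleep 10
done
# ...then perform the additional per-group tasks
```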
again, this is just an idea we wanted to try, but as you said there may be a more elegant solution…
Thanks,