Dec 10 Agenda
- 3.15 RPMs support landed
- I did code fixes / workarounds for all the upgrade issues.
- 3.16 RPMs support
- Running into another upgrade issue from 3.3
- Decided to actually drop support for upgrading from pre-3.8 this time
- Need to announce on discourse.
- Cluster support
- Clarify what changes are still needed?
- Identical token auth keys
- galaxy_ng users probably need this, not just regular pulp_container users.
- Even if their QA hasn’t caught it.
- Molecule test for multiple containers
- Fixing any bugs along the way
- Prototyping with a load balancer or something
- Documentation updates.
- Work with someone from Ansible QE to test it out in their HA environment.
- The focus before cluster support or usability work will be molecule performance
Jan 12 Agenda
- CI issue with pulp_rpm migrations not being idempotent
- CI issue with the locale_gen module missing from ansible-core 2.12, used by the geerlingguy role
- Will drop support for EL7 pip mode while doing performance improvements over the next week
Jan 19 Agenda
- Fixed 3 CI issues / actual bugs in pulp_installer, now encountering 2 more
- Force-merged since it makes the CI less buggy overall, and ppicka’s PR will be better fixed with it in place.
- Noticed in the Ansible docs that `ansible-galaxy collection install` can operate on a source directory; it builds and installs in 1 step.
- CI issue with pulp_rpm migrations not being idempotent
- This was fixed by pulp_rpm.
- “pulp_repos” role improvements / bugfixes:
- Eliminates need for workaround in example-use playbook.
- Fixes actually adding EPEL on RHEL7 for the pulp_database/pulp_redis roles
- Assigned to ppicka.
- postgresql role issue
Jan 26 Agenda
- https://github.com/pulp/pulp_installer/pull/382 has now been waiting a long time to merge
- [mikedep333] I will re-review
- Finished triaging “Issue #9211: Vagrant devel installs have SELinux errors” by linking to it in 2 GH issues
- Is this the appropriate way to migrate to GH issues?
- Is tox actually overall beneficial for us?
- It causes silly/laborious integration between CI and tox
- How often do we test locally with all the python versions?
- How often do we test locally with any Python version different from the venv’s?
- It is preventing me from calling molecule steps individually in the CI (which helps identify performance regressions.)
- agreed: Remove tox
- Drop idempotence from the longest running test, source-dynamic?
- +1 for PRs, keep on nightly [fao]
- Design of https://github.com/pulp/pulp_installer/pull/830
- Agreed: Use the cache_enabled variable to determine whether or not the pulp_redis role actually does anything. Wrap the tasks in a when condition. No more need for a separate role.
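A minimal sketch of the agreed approach, assuming the flag is exposed as a boolean variable named `cache_enabled`; the exact variable name and task list are illustrative, not the final implementation:

```yaml
# roles/pulp_redis/tasks/main.yml (illustrative): the whole role becomes a
# no-op when caching is disabled, so no separate "skip redis" role is needed.
- name: Install and configure Redis for the Pulp cache
  when: cache_enabled | default(false) | bool
  block:
    - name: Install Redis
      ansible.builtin.package:
        name: redis
        state: present

    - name: Start and enable Redis
      ansible.builtin.service:
        name: redis
        state: started
        enabled: true
```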
Feb 4 Agenda
- What a week full of CI and installer breakages!
- CentOS 8 being discontinued was kind of foreseeable and avoidable
- pip-tools being broken by pip 22 is a routine breakage
- pulp_devel idempotency breakage is a routine breakage
- release CI breakage (and nightly CI breakage) - not sure if we can test for this better. I just run yamllint.
- Rocky Linux 8 support?
- Ask on discourse
- There’s also Alma; it can be supported via a generic check for the "RedHat" family and major version 8 (see the sketch after this list).
- Release announcement for 3.17.2?
- agreed: Do an announcement
- GitHub generated release notes?
- Reviewed https://github.com/pulp/pulp_installer/pull/870
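Regarding the Rocky/Alma item above, a hedged sketch of the generic "RedHat family, major version 8" check; the task and included file names are illustrative, not existing installer code:

```yaml
# Covers CentOS 8 / Stream 8, Rocky Linux 8, AlmaLinux 8, RHEL 8, etc. without
# enumerating every derivative by name.
- name: Apply EL8-specific repo setup
  ansible.builtin.include_tasks: el8_repos.yml  # illustrative file name
  when:
    - ansible_facts.os_family == "RedHat"
    - ansible_facts.distribution_major_version == "8"
```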
Feb 9 Agenda
- Created pulp-installer channel
- I found another way to do different variables per distro:
- Still need to announce vagrant environments changes
- pulp_repos design
- include_role with the “public” option (see the sketch after this list)
- “disable all repos” single variable
- Ansible core main branch still broken in CI
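A short sketch of the `include_role` "public" option mentioned in the pulp_repos design item above; with `public: true`, the included role's defaults and vars stay visible to the rest of the play. The variable name in the second task is hypothetical, used only for illustration.

```yaml
- name: Include the repos role and expose its variables to later tasks
  ansible.builtin.include_role:
    name: pulp_repos
    public: true

- name: A later task can read a variable defined by pulp_repos
  ansible.builtin.debug:
    var: pulp_repos_enabled_repos  # hypothetical variable name
```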
Feb 16 Agenda
- We really do not need the “source-dynamic” test.
- Nobody uses this exact combination. (devel role + dynamic include)
- I cannot see why someone would need to use it.
- It’s realistically tested by release-dynamic and source-static
- Already done a day ago: Moved to nightly CI only
- “packages-dynamic” or “release-dynamic” should be moved to nightly CI
- release-dynamic always tests the latest version of pulpcore
- packages-dynamic is quicker
- Already moved a day ago
- Status of CI refactoring (& performance improvements)
- Made the 2 changes above; those tests now run only in nightly CI
- PR CI tests dropped from 11 to 9
- tox removed; Python versions and everything else are now determined by GHA logic (see the sketch after this list)
- ansible-core tests merged into our other test matrix and GHA logic
- PR CI tests dropped from 9 to 7
- Added more recent versions of ansible to the test matrix
- We can see how long each molecule phase takes
- Performance still not satisfactory.
- Will look into dropping idempotency from the longest running test or 2.
- Will look into more general performance improvements
- Noticed that packages tests take about 10 fewer minutes than release tests
- I’m trying to use (with molecule) an Ansible plugin that reports the longest-running tasks.
- fao89: Try using only CentOS 7 and CentOS 8 in the release tests for comparison
- packages-static takes 8 more minutes than packages-dynamic?
- Dynamic should take longer, it runs pulp_common repeatedly.
- maybe ansible interpreter version? Python version?
- yes, 2.10 vs latest, 3.7 vs 3.9
- fao89: See variables
- the differences are unix sockets and a different webserver.
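As a rough illustration of the "GHA logic" point above, the Python and Ansible versions can live directly in the workflow matrix instead of tox.ini. The job name, matrix keys, and version strings below are hypothetical, not the actual workflow file:

```yaml
# .github/workflows fragment (illustrative): each molecule scenario is pinned
# to a Python interpreter and an Ansible version by the matrix itself.
jobs:
  molecule:
    strategy:
      fail-fast: false
      matrix:
        include:
          - test_type: release-static
            python: "3.9"
            ansible: "ansible-core~=2.12.0"
          - test_type: packages-dynamic
            python: "3.7"
            ansible: "ansible~=2.10.0"
```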
Feb 24 Agenda
- Remaining CI performance improvement: reduce the time to run source-static
- Proposal: Only run the devel role in nightly CI or for PRs when the devel role gets modified. For most PRs, just install from source.
- Sub-proposal: Don’t create a new test; just run `sed` to drop pulp_devel from the regular PR CI test
- Note: This still tests installing the latest (master branch) of pulp.
- pulp_repos & redis role: https://github.com/pulp/pulp_installer/pull/901
- Where does redis come from on EL7? EPEL7, for both CentOS and RHEL
- Wrong dependency on pulp_common
Mar 3 Agenda
- Galaxy Signing Service
- Looks like it can be done in a few days to a week
- HA
- Need to refactor the dynamic molecule tests into multi-node tests (see the sketch after this list)
- Need some adjustments on keys (encryption, token auth, webserver, …)
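For the multi-node molecule refactor above, a hedged sketch of what the platforms list in a scenario's molecule.yml might look like; the instance names and image are illustrative:

```yaml
# molecule.yml fragment (illustrative): two instances instead of one, so the
# converge playbook can exercise an HA/cluster layout.
platforms:
  - name: pulp-primary
    image: quay.io/centos/centos:stream8
    pre_build_image: true
  - name: pulp-secondary
    image: quay.io/centos/centos:stream8
    pre_build_image: true
```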
Mar 10 Agenda
- onboarding new team members
- read pulp 3 docs, particularly architecture
- familiarity with task tracking
- Status of galaxy signing service
- going fairly well
- wrapping around gpg commands is awkward but doable
- this might help for some gpg commands: Ansible Galaxy
- PR for repos role usage
- merging of certs fix
Mar 16 Agenda
- Is it possible to have 2 dynaconf settings files, one generated by installer, and a user-override?
- Status of AH signing service
- 1 remaining bug affecting release installs on EL7 only
Mar 23 Agenda
- Most of the way done on making db encryption keys the same
- Use of run_once / register / debug module to designate the primary host (see the sketch after this list)
- How to handle non-identical keys already on the cluster?
- Worker nodes need the key too, right?
- Working partial implementation of a cluster for release-dynamic
- Resolved multiple accidental dependencies of pulp_common on pulp_database
- Need to do a release still for the AH signing service
- SELinux updates just merged
- galaxy-importer support in SELinux
- Need test steps
- agreed: reach out to AH
- settings.local.py
- agreed: Put header at the top of settings.py saying to modify settings.local.py instead
- Setting pulp_user_home should set the entirety of /var/lib/pulp
- There is a mismatch between certain variables and certain sub-variables of pulp_settings
- We should look for ways to merge these variables into 1.
- ppicka will address if he has time
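A hedged sketch of the run_once / register / debug idea for designating the primary host, from the db encryption keys item above; the task and variable names are illustrative, not the actual PR code:

```yaml
# run_once executes the task on the first host and applies the registered
# result to every host in the batch, so all hosts agree on the "primary".
- name: Designate the primary host
  ansible.builtin.debug:
    msg: "{{ ansible_play_hosts_all | first }}"
  register: __primary_host
  run_once: true

- name: Run the database config steps only on the primary host
  ansible.builtin.include_role:
    name: pulp_database_config  # illustrative; the real role layout may differ
  when: inventory_hostname == __primary_host.msg
```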
Mar 30 Agenda
- Inconsistency in default config for postgres/redis
- Currently:
- postgres binds to 0.0.0.0
- postgres only permits connections from 127.0.0.1
- redis binds to 127.0.0.1
- What should we default to?
- Accept connections / bind to 0.0.0.0
- Can we configure postgres to allow the other hosts by IP?
- we cannot reliably guess which IP address is the correct one
- Refactoring pulp_webserver to use the __pulp_database_config_real_sole_host instead of installing pulp-common
- Current status of fixing non-identical database fields key
- Will not do anything about non-identical existing keys other than throwing a proper error message in the installer
- Lots of effort involved in picking the correct host to run pulp_database_config.
Apr 6 Agenda
- Welcome Humberto!
- Figured out how to set the most global of variables
- access once set with `hostvars['localhost']['var_name']`
- as opposed to the normal way `var_name`, which can also be written as `hostvars[inventory_hostname]['var_name']`
- set with `set_fact:`, `delegate_facts: True`, `delegate_to: localhost`, and `run_once: True`
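A minimal sketch of the pattern described above, with a placeholder variable name and value:

```yaml
# Set the fact once, on localhost, so every host can later read the same value
# via hostvars['localhost']['var_name'], no matter which host computed it.
- name: Record a truly global value
  ansible.builtin.set_fact:
    var_name: "some value"
  delegate_to: localhost
  delegate_facts: true
  run_once: true

- name: Read the global value from any host
  ansible.builtin.debug:
    msg: "{{ hostvars['localhost']['var_name'] }}"
```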
Apr 20 Agenda
- Status of database fields key PR
- Satoe messaged me privately with an error, which leads to a design question.
- Should we continue to wait for the database fields key PR to release pulp_installer 3.19.0?
- agreed: Release it beforehand if someone complains.
- 3.18 RPMs
Apr 27 Agenda
- Updated the db fields encryption keys PR to support replacing other hosts’ keys with 1 host’s key
May 9 Agenda
- Status of 2 big cluster support PRs:
- Cluster CI
- pulp_webserver independence
- dependent on the epel7 PR
- Status of el7 support in packages?
- el7 packages will still be built for pulpcore 3.18 RPMs.
- docs examples not showing up properly on the “Customizing Your Pulp Deployment - Pulp Installer” page
- Suggestions on renaming / moving: “Customizing Your Pulp Deployment - Pulp Installer”
- How about move to a new page called “cluster examples”?
- Not technically accurate because 1 example is an external postgres/redis, but they could be postgres/redis clusters.
- Suggestions on renaming / moving: “Customizing Your Pulp Deployment - Pulp Installer”
- How about “specifying plugin versions” or “Installing specific plugin versions”
- Running into an issue with my easy-approach-to-settings
- Desire: settings like content_origin get set to “the 1 host that will run pulp-webserver”
- Problem example: content_origin needs to be set for the pulp-api host, but pulp webserver gets deployed afterwards. I cannot determine “the 1 host that will run pulp-webserver”, only “the 1 host that has already run pulp-webserver”.
- Possible solution: Special group names like pulp_webservers? A host could be in multiple groups. Users would still need to apply the correct roles list to each host. (See the sketch after this list.)
- agreed: follow up with pavel
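A hedged sketch of what the "special group names" idea could look like in an inventory; the group and host names are hypothetical, and no design was agreed here:

```yaml
# Hosts are declared in groups named for the roles they will eventually run,
# so a role like pulp_api could compute content_origin from
# groups['pulp_webservers'] | first before pulp_webserver has run anywhere.
all:
  children:
    pulp_api:
      hosts:
        pulp-api-1:
    pulp_content:
      hosts:
        pulp-content-1:
    pulp_webservers:
      hosts:
        pulp-web-1:
```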
May 11 Agenda
- Remaining CentOS 9 work
- vagrant box
- upgrade images
- Additional complexity in implementing webserver support for multiple api/content hosts
- This is basically load balancing
- Load balancing parameters per-host
- Global load balancing parameters
- Proposed design:

```yaml
pulp_webserver_api_balancing_params:
  foo: bar
  foo2: bar2
pulp_webserver_api_servers:
  - url: pulp-api-1:24817
    parameters:
      foo: bar
      foo2: bar2
  - url: pulp-api-2:24817
    parameters:
      foo: bar
      foo2: bar2
pulp_webserver_content_balancing_params:
  foo: bar
  foo2: bar2
pulp_webserver_content_servers:
  - url: pulp-content-1:24816
    parameters:
      foo: bar
      foo2: bar2
  - url: pulp-content-2:24816
    parameters:
      foo: bar
      foo2: bar2
```
- Let’s triage open issues
May 17 Agenda
- Mike’s desire to make vagrant installs no longer build & install the collection
- https://github.com/pulp/pulp_installer/pull/1099
- This conflicts with molecule, which does not build and install the collection. But the installed collection takes precedence over the local repo.
- I have repeatedly run molecule commands twice, only to have to run them a 3rd time after deleting the collection.
- Making this change would require all vagrant users to run `rm -rf ~/.ansible/collections/ansible_collections/pulp/`
- agreed: Make this change, and communicate it well. Devs often use vagrant envs for months.