Split up some responsibilities from the entrypoint to a service container. #551

Merged: 1 commit into pulp:latest on Oct 9, 2023

Conversation

decko (Member) commented Sep 19, 2023

Closes #544

@decko decko marked this pull request as draft September 19, 2023 19:31
@decko decko force-pushed the new_services_container branch 2 times, most recently from 6b7b203 to ef88c6b Compare September 20, 2023 18:04
@decko decko force-pushed the new_services_container branch 2 times, most recently from c64e520 to d1bc2c3 Compare September 26, 2023 21:28
images/assets/pulp-content (outdated; review thread resolved)
Comment on lines 98 to 103
migration_service:
  condition: service_completed_successfully
signing_key_service:
  condition: service_completed_successfully
Member:

I fear this won't work with podman-compose unless we keep the containers running there.

Member Author:

Yeah, it won't work with podman-compose for now. The idea is to let docker-compose use it, and to emulate the same behavior for podman-compose with the wait_for_* scripts.
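
For context, a minimal sketch of the compose shape under discussion. The service and image names are illustrative and add_signing_service.sh is a hypothetical helper; only the depends_on conditions mirror the snippet above:

services:
  migration_service:
    image: pulp/pulp-minimal:latest                  # illustrative image name
    command: pulpcore-manager migrate --noinput      # one-shot: exits once migrations are applied
  signing_key_service:
    image: pulp/pulp-minimal:latest
    command: /usr/local/bin/add_signing_service.sh   # hypothetical helper, for illustration only
  pulp_api:
    image: pulp/pulp-minimal:latest
    depends_on:
      migration_service:
        condition: service_completed_successfully
      signing_key_service:
        condition: service_completed_successfully

docker-compose will not start pulp_api until the one-shot services have exited successfully; podman-compose at the time ignored these conditions, which is why the wait_for_* scripts emulate the same gating.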

RUN rm -f /usr/local/bin/pulp-content
# RUN rm -r /usr/local/bin/pulp-content
Member:

I know this is a hack, but we need it for older pulpcore.

Member Author:

I'm checking it, but it seems I'm unable to revert it. 😭

Member:

Don't confuse -f and -r as I did.

@decko decko force-pushed the new_services_container branch 2 times, most recently from a4e0c73 to 718ceb2 Compare September 27, 2023 22:52
@decko decko closed this Sep 27, 2023
@decko decko reopened this Sep 27, 2023
@decko decko force-pushed the new_services_container branch 4 times, most recently from 80ba629 to 874ee18 Compare October 2, 2023 16:44
@decko decko closed this Oct 2, 2023
@decko decko reopened this Oct 2, 2023
@decko decko force-pushed the new_services_container branch 2 times, most recently from fe981f1 to 0e97cd6 Compare October 2, 2023 18:43
@decko decko closed this Oct 2, 2023
@decko decko reopened this Oct 2, 2023
@decko decko closed this Oct 2, 2023
@decko decko reopened this Oct 2, 2023
@decko decko closed this Oct 3, 2023
@decko decko reopened this Oct 3, 2023
@decko decko force-pushed the new_services_container branch 5 times, most recently from e2d25b4 to 9c23921 Compare October 4, 2023 12:52
@decko decko force-pushed the new_services_container branch 2 times, most recently from 194dac4 to a41bd8f Compare October 4, 2023 13:51
@decko decko closed this Oct 4, 2023
@decko decko reopened this Oct 4, 2023
@decko decko marked this pull request as ready for review October 4, 2023 15:01
@decko decko requested review from mdellweg and dkliban October 4, 2023 15:01
images/compose/compose.yml (outdated; review thread resolved)
images/assets/pulp-worker (outdated; review thread resolved)
Comment on lines +98 to +99
set_init_password_service:
  condition: service_completed_successfully
Member:

This is not a technical dependency. Pulp can run just fine without that password. I think we should drop these lines.

With a configured SSO solution, you can still use Pulp without an admin user.

Member Author:

I vote for leaving it as it is now, or at least keeping it as a comment.
It would make it easier for new users to find out how to change the password without using podman exec ....

Member:

I'm not saying we should remove the service container. We just don't need it as a requirement on the api pod.
It will still start up anyway unless you call podman-compose in a special way.

images/assets/wait_on_database_migrations.sh (review thread resolved)
@@ -271,14 +271,14 @@ jobs:
sed -i "s/pulp-web:latest/${{ matrix.app.web_image }}:${WEB_TAG}/g" $FILE
id | grep "(root)" || sudo usermod -G root $(whoami)
podman-compose -f $FILE up -d
sleep 30
podman exec compose_pulp_api_1 /usr/bin/wait_on_database_migrations.sh
Member:

Is this what makes it work with podman-compose? I'd love to hear the reasoning.

Member Author:

Nope. The point here is to wait for all the migrations to be applied. My suspicion about why it appeared to work before is that we didn't use the health check, and within 30 seconds just enough migrations had run for the status endpoint to respond. It takes at least 3 or 4 more minutes for the migrations to finish, and they are now required before the api and content containers report healthy.
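
For reference, a wait-on-migrations script can be as small as the following sketch. This is an assumption about the general shape, not necessarily the contents of images/assets/wait_on_database_migrations.sh, and it assumes pulpcore-manager exposes Django's showmigrations command:

#!/bin/bash
# Unapplied migrations show up as "[ ]" in showmigrations output;
# loop until none are left before letting the service report healthy.
while pulpcore-manager showmigrations | grep -q '\[ \]'; do
  echo "Waiting on database migrations to be applied..."
  sleep 5
done
echo "Database migrations are complete."

A loop like this removes the need for the fixed sleep 30 and keeps working even when migrations take the 3 to 4 minutes mentioned above.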

Member:

3 to 4 minutes? We need to talk about our upgrade strategy once more and how we can safely squash migrations.

@decko decko merged commit ccbb90e into pulp:latest Oct 9, 2023
15 checks passed
@decko decko deleted the new_services_container branch October 9, 2023 20:28

mkdir -p /var/lib/pulp/media \
/var/lib/pulp/assets \
/var/lib/pulp/tmp


@decko @mdellweg It looks like the /var/lib/pulp/tmp directory is no longer being pre-created, which causes galaxy worker containers to CrashLoopBackOff. Is this a problem for Pulp as well? Should the fix be here, or in the Dockerfile?

Member:

/var/lib/pulp is mounted as a volume. This must happen at runtime. I wonder if the services using it are creating these dirs on demand? How does it even work for pulp these days @decko ?

Member Author:

We have it created in the core base image.

Member Author:

> /var/lib/pulp is mounted as a volume. This must happen at runtime. I wonder if the services using it are creating these dirs on demand? How does it even work for pulp these days @decko ?

Ok, so it was being re-created after /var/lib/pulp was mounted.
Should it have its own mount point?

Member:

Our compose file mounts a volume on /var/lib/pulp. So I suspect one of the setup processes is probably recreating these dirs?

Member Author:

> Our compose file mounts a volume on /var/lib/pulp. So I suspect one of the setup processes is probably recreating these dirs?

Yes, we mount it, but we're not recreating those directories at runtime. Since this is a tmp dir, would it make sense for Pulp itself to create it if it's not found?
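
If Pulp were to create it on demand, a runtime-safe variant of the removed Dockerfile step could look something like this in the container entrypoint (a sketch, not what this PR ships):

#!/bin/bash
# Run after the /var/lib/pulp volume is mounted, so the directories
# end up inside the volume instead of being hidden by the mount.
mkdir -p /var/lib/pulp/media /var/lib/pulp/assets /var/lib/pulp/tmp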


Successfully merging this pull request may close these issues.

The entrypoints for services are not compatible with kubernetes/compose
4 participants