Split up some responsibilities from the entrypoint to a service container. #551
Conversation
images/compose/compose.yml
migration_service:
  condition: service_completed_successfully
signing_key_service:
  condition: service_completed_successfully
I fear this won't work with podman-compose unless we keep the containers running there.
Yeah, it won't work with podman-compose for now. The idea is to enable docker-compose to use it and emulate the same behavior using the wait_for_* scripts.
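For context, a minimal sketch of the docker-compose mechanism being discussed; the service names, image, and commands below are illustrative assumptions, not the exact contents of compose.yml:

    services:
      migration_service:
        image: pulp/pulp-minimal
        # one-shot container: applies the database migrations and exits
        command: pulpcore-manager migrate --noinput

      pulp_api:
        image: pulp/pulp-minimal
        command: pulp-api
        depends_on:
          migration_service:
            # docker-compose only starts this service once the one-shot
            # migration container has exited with status 0
            condition: service_completed_successfully

Since podman-compose does not honor this condition for now, the wait_for_* scripts are the fallback there.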
RUN rm -f /usr/local/bin/pulp-content
# RUN rm -r /usr/local/bin/pulp-content
I know this is a hack, but we need it for older pulpcore.
I'm checking it, but it seems I'm unable to revert it. 😭
Don't confuse -f and -r as I did.
set_init_password_service:
  condition: service_completed_successfully
This is not a technical dependency. Pulp can run just fine without that password, so I think we should drop these lines.
With a configured SSO solution, you can still use Pulp without the admin user.
I vote for leaving it as it is now, or at least keeping it as a comment.
It would make it easier for new users to find out how to change the password without using podman exec ...
I'm not saying we should remove the service container. We just don't need it as a requirement on the api pod.
It will still start up anyway unless you call podman-compose in a special way.
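As a rough sketch of that suggestion (service names and the password command are assumptions for illustration): keep the one-shot service in the compose file, but don't list it under the api's depends_on, so it still runs without gating the api.

    services:
      set_init_password_service:
        image: pulp/pulp-minimal
        # one-shot container; assumed command for setting the initial admin password
        command: pulpcore-manager reset-admin-password --password "${PULP_ADMIN_PASSWORD}"

      pulp_api:
        image: pulp/pulp-minimal
        command: pulp-api
        depends_on:
          migration_service:               # defined elsewhere in the file
            condition: service_completed_successfully
          # set_init_password_service intentionally not listed here:
          # it still starts with the stack, but the api does not wait on it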
@@ -271,14 +271,14 @@ jobs:
sed -i "s/pulp-web:latest/${{ matrix.app.web_image }}:${WEB_TAG}/g" $FILE
id | grep "(root)" || sudo usermod -G root $(whoami)
podman-compose -f $FILE up -d
sleep 30
podman exec compose_pulp_api_1 /usr/bin/wait_on_database_migrations.sh
Is this what makes it work with podman-compose? I'd love to hear the reasoning.
Nope. The point here is to wait for all the migrations to be applied. My suspicion about why it was working before is that we didn't use the healthy checker, and within 30 seconds you have exactly the amount of migrations needed to make the status endpoint work. It would take at least 3 or 4 more minutes to finish the migrations, which are now needed to change the status of the api and content containers to healthy.
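For illustration, this is roughly the kind of healthcheck being referred to; the endpoint, port, and timings here are assumptions, not the exact settings from this PR:

    services:
      pulp_api:
        image: pulp/pulp-minimal
        command: pulp-api
        healthcheck:
          # the container only becomes "healthy" once the status endpoint responds,
          # which requires the database migrations to have been applied
          test: ["CMD", "curl", "-f", "http://localhost:24817/pulp/api/v3/status/"]
          interval: 10s
          timeout: 5s
          retries: 30

With a service_healthy or service_completed_successfully condition in depends_on, dependent containers then wait on this check instead of on a fixed sleep 30.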
3 to 4 minutes? We need to talk about our upgrade strategy once more and how we can safely squash migrations.
mkdir -p /var/lib/pulp/media \
         /var/lib/pulp/assets \
         /var/lib/pulp/tmp
/var/lib/pulp is mounted as a volume. This must happen at runtime. I wonder if the services using it are creating these dirs on demand? How does it even work for pulp these days, @decko?
We have it created in the core base image.
/var/lib/pulp/tmp
/var/lib/pulp is mounted as a volume. This must happen at runtime. I wonder if the services using it are creating these dirs on demand? How does it even work for pulp these days, @decko?
OK, so it was being re-created after /var/lib/pulp was mounted.
Should it have its own mount point?
Our compose file mounts a volume on /var/lib/pulp. So I suspect one of the setup processes is probably recreating these dirs?
Our compose file mounts a volume on /var/lib/pulp. So I suspect one of the setup processes is probably recreating these dirs?
Yes, we mount it, but we're not recreating those directories at runtime. Since this is a tmp dir, would it make sense for pulp itself to create it if it's not found?
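To make the situation concrete, a minimal sketch of the mount under discussion (the volume and service names are assumptions): whatever is mounted on /var/lib/pulp at runtime takes the place of the path baked into the image, so the directories have to end up inside the volume one way or another.

    services:
      pulp_api:
        image: pulp/pulp-minimal
        command: pulp-api
        volumes:
          # the volume is mounted over /var/lib/pulp at runtime;
          # media/, assets/ and tmp/ need to exist inside it, either copied in
          # when the volume is first populated or created by a setup step
          - pulp_storage:/var/lib/pulp

    volumes:
      pulp_storage: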
Closes #544