Cleanup running containers on the Control-C signal #422
base: master
Conversation
Tested while multiple containers were running on all build hosts.
Force-pushed from 54b622f to 770ffb1
Force-pushed from 770ffb1 to 284f396
- Add the `signal` feature to `tokio` to interrupt and handle the Control-C signal in Butido.
- Add Control-C signal handling into the `Orchestrator`.
- Implement `Drop` on the `JobHandle` to ensure container cleanup.

Fixes science-computing#409

Signed-off-by: Nico Steinle <nico.steinle@eviden.com>
Force-pushed from 284f396 to 752d2fc
I haven't done any testing yet but already found some potential issues in the code.
```diff
-        drop(self.bar);
+        drop(self.bar.clone());
```
Huh, we're cloning and immediately dropping the clone? Shouldn't this be a no-op? I'm a bit surprised that Clippy doesn't catch this.
(Looking only at this context, I'm surprised that there is a `self.bar`-related change at all...)
Yeah. This one also gave me the chills, but it is intentional - Clippy was/is kinda the one who recommended this.
The clone is to satisfy the borrow checker, but it should be fine here because, per https://docs.rs/indicatif/0.17.8/indicatif/struct.ProgressBar.html, the progress bar is an `Arc` around its internal state: when the progress bar is cloned it just increments the refcount, so the original and its clone share the same state.
But I also don't like it, and I'm open to recommendations.
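For reference, a minimal sketch of why such a clone is cheap and shares state (using the `indicatif` 0.17 API from the link above):

```rust
use indicatif::ProgressBar;

fn main() {
    let bar = ProgressBar::new(10);
    let bar2 = bar.clone(); // bumps the internal Arc refcount; no new bar is created
    bar2.inc(5);
    assert_eq!(bar.position(), 5); // the original observes the clone's progress
    bar.finish();
}
```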
This does indeed generate a compiler error:

```console
$ cargo check -r
    Checking butido v0.5.0 (/home/michael/butido-clean)
error[E0509]: cannot move out of type `JobHandle`, which implements the `Drop` trait
   --> src/endpoint/scheduler.rs:214:14
    |
214 |         drop(self.bar);
    |              ^^^^^^^^
    |              |
    |              cannot move out of here
    |              move occurs because `self.bar` has type `indicatif::ProgressBar`, which does not implement the `Copy` trait
    |
help: consider cloning the value if the performance cost is acceptable
    |
214 |         drop(self.bar.clone());
    |                      ++++++++

For more information about this error, try `rustc --explain E0509`.
error: could not compile `butido` (bin "butido") due to 1 previous error
```
I'll have to look at it later. That `clone()` definitely isn't the right solution though - you're creating an additional reference and then dropping that additional reference (just a waste of time with no effect - the intention of this code was to drop the last reference).
This error probably comes from the fact that `self` is mutable now, but I'm surprised that this drop ever worked before (I guess the compiler is quite smart in that case).
Ok, so this error is the result of implementing the `Drop` trait for `JobHandle`, and https://doc.rust-lang.org/error_codes/E0509.html explains it nicely.
I think we can simply drop this `drop()` statement for now. It was added in 1b792a0 without an explicit explanation and I think it's just there to make the code cleaner by avoiding accidental access to the progress bar after passing it to the `LogReceiver`. The `LogReceiver` calls `self.bar.finish_with_message()` at the end of `join()`, so it shouldn't cause issues if there's still a reference around (it might continue to consume some resources but it should be "locked" after that). Would be great if you could test and confirm that theory though.
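To illustrate, a minimal repro of what E0509 guards against (with a `String` standing in for the `ProgressBar` field):

```rust
// E0509: a field cannot be moved out of a type that implements Drop,
// because drop(&mut self) would later run on a partially-moved value.
struct Handle {
    bar: String, // stand-in for the non-Copy `indicatif::ProgressBar`
}

impl Drop for Handle {
    fn drop(&mut self) {
        println!("cleaning up {}", self.bar);
    }
}

fn main() {
    let h = Handle { bar: "progress".into() };
    // drop(h.bar); // error[E0509]: cannot move out of type `Handle`, which implements the `Drop` trait
    drop(h); // dropping the whole value is fine
}
```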
```diff
@@ -370,6 +374,36 @@ impl JobHandle {
     }
 }
+
+impl Drop for JobHandle {
+    fn drop(&mut self) {
+        debug!("Cleaning up JobHandle");
```
Nit: Might be nice/useful to include the job ID (but debugging is obviously optional).
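Something along these lines could work (hypothetical - this assumes the handle's `job` field exposes a `uuid()` accessor, as the `trace!` calls elsewhere in the PR suggest):

```rust
debug!("Cleaning up JobHandle for job {}", self.job.uuid());
```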
```rust
        if self.container_id.is_some() {
            debug!("Container was already started");
            let docker = self.endpoint.docker().clone();
            let container_id = self.container_id.take().unwrap();
```
This isn't pretty and not guaranteed to be safe - please use `if let Some(container_id) = self.container_id` or something similar instead.
I tried that. It didn't work out because of the lifetimes.
That most likely was due to something else then. I can give it a look after my vacation but I don't see why that should create (unsolvable) issues.
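For what it's worth, the compiler rejects a plain `if let Some(container_id) = self.container_id` in `drop(&mut self)` because it would move the `String` out of `self`; binding via `clone()` (as in the patch below) or via `Option::take()` sidesteps the move. A minimal sketch of the `take()` variant:

```rust
struct JobHandle {
    container_id: Option<String>,
}

impl Drop for JobHandle {
    fn drop(&mut self) {
        // take() swaps None into the field and returns the owned value,
        // so nothing is moved out of `self` directly:
        if let Some(container_id) = self.container_id.take() {
            println!("would clean up container {container_id}");
        }
    }
}

fn main() {
    let _handle = JobHandle { container_id: Some("abc123".into()) };
}
```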
You can use the following patch:

```diff
--- a/src/endpoint/scheduler.rs
+++ b/src/endpoint/scheduler.rs
@@ -377,10 +377,9 @@ impl JobHandle {
 impl Drop for JobHandle {
     fn drop(&mut self) {
         debug!("Cleaning up JobHandle");
-        if self.container_id.is_some() {
+        if let Some(container_id) = self.container_id.clone() {
             debug!("Container was already started");
             let docker = self.endpoint.docker().clone();
-            let container_id = self.container_id.take().unwrap();
             tokio::spawn(async move {
                 let container = docker.containers().get(&container_id);
                 let container_info = container.inspect().await.unwrap();
```
```rust
                if container_info.state.running {
                    debug!("Container is still running, cleaning up...");
```
Nit: Including the container ID would be nice.
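e.g., something along these lines (wording illustrative; `container_id` is in scope inside the spawned task):

```rust
debug!("Container {} is still running, cleaning up...", container_id);
```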
```rust
            tokio::spawn(async move {
                let container = docker.containers().get(&container_id);
                let container_info = container.inspect().await.unwrap();
```
We should avoid the `unwrap()` here - I'd probably just log the error (in theory we might occasionally run into such errors when the container terminates between the `if` and this inspect).
PS: We want to avoid `unwrap()` as much as possible in general (but there are of course exceptions where it's fine).
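A sketch of the spawned task with the `unwrap()` replaced by logging (same Docker client calls as in the snippet above; `error!` assumed to come from the same logging macros as the existing `debug!`):

```rust
tokio::spawn(async move {
    let container = docker.containers().get(&container_id);
    match container.inspect().await {
        Ok(container_info) => {
            if container_info.state.running {
                debug!("Container {} is still running, cleaning up...", container_id);
                // ... stop/remove the container here ...
            }
        }
        // The container may have exited between the check and this
        // inspect call, so just log the error instead of panicking:
        Err(e) => error!("Failed to inspect container {}: {}", container_id, e),
    }
});
```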
`Fe2O3.unwrap().unwrap().unwrap().unwrap()`
```diff
-        let running_jobs = jobs
-            .into_iter()
-            .map(|prep| {
-                trace!(parent: &run_span, job_uuid = %prep.1.jobdef.job.uuid(), "Creating JobTask");
-                // the sender is set or we need to use the root sender
-                let sender = prep
-                    .3
-                    .into_inner()
-                    .unwrap_or_else(|| vec![root_sender.clone()]);
-                JobTask::new(prep.0, prep.1, sender)
-            })
-            .inspect(
-                |task| trace!(parent: &run_span, job_uuid = %task.jobdef.job.uuid(), "Running job"),
-            )
-            .map(|task| {
-                task.run()
-                    .instrument(tracing::debug_span!(parent: &run_span, "JobTask::run"))
-            })
-            .collect::<futures::stream::FuturesUnordered<_>>();
-        debug!("Built {} jobs", running_jobs.len());
-
-        running_jobs
-            .collect::<Result<()>>()
-            .instrument(run_span.clone())
-            .await?;
-        trace!(parent: &run_span, "All jobs finished");
-        drop(run_span);
-
-        match root_receiver.recv().await {
-            None => Err(anyhow!("No result received...")),
-            Some(Ok(results)) => {
-                let results = results
-                    .into_iter()
-                    .flat_map(|tpl| tpl.1.into_iter())
-                    .map(ProducedArtifact::unpack)
-                    .collect();
-                Ok((results, HashMap::with_capacity(0)))
-            }
-            Some(Err(errors)) => Ok((vec![], errors)),
-        }
+        tokio::select! {
+            _ = token.cancelled() => {
+                anyhow::bail!("Received Control-C signal");
+            }
+            r = async {
+                let running_jobs = jobs
+                    .into_iter()
+                    .map(|prep| {
+                        trace!(parent: &run_span, job_uuid = %prep.1.jobdef.job.uuid(), "Creating JobTask");
+                        // the sender is set or we need to use the root sender
+                        let sender = prep
+                            .3
+                            .into_inner()
+                            .unwrap_or_else(|| vec![root_sender.clone()]);
+                        JobTask::new(prep.0, prep.1, sender)
+                    })
+                    .inspect(
+                        |task| trace!(parent: &run_span, job_uuid = %task.jobdef.job.uuid(), "Running job"),
+                    )
+                    .map(|task| {
+                        task.run()
+                            .instrument(tracing::debug_span!(parent: &run_span, "JobTask::run"))
+                    })
+                    .collect::<futures::stream::FuturesUnordered<_>>();
+                debug!("Built {} jobs", running_jobs.len());
+
+                running_jobs
+                    .collect::<Result<()>>()
+                    .instrument(run_span.clone())
+                    .await?;
+                trace!(parent: &run_span, "All jobs finished");
+                drop(run_span);
+
+                match root_receiver.recv().await {
+                    None => Err(anyhow!("No result received...")),
+                    Some(Ok(results)) => {
+                        let results = results
+                            .into_iter()
+                            .flat_map(|tpl| tpl.1.into_iter())
+                            .map(ProducedArtifact::unpack)
+                            .collect();
+                        Ok((results, HashMap::with_capacity(0)))
+                    }
+                    Some(Err(errors)) => Ok((vec![], errors)),
+                }
+            } => {
+                r
+            }
+        }
```
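For context, `token` here is presumably the cancellation token that the Ctrl-C handler cancels. A minimal, self-contained sketch of that wiring (assuming `tokio_util::sync::CancellationToken` and tokio's `signal` feature; names are illustrative):

```rust
use tokio_util::sync::CancellationToken;

#[tokio::main]
async fn main() {
    let token = CancellationToken::new();

    // Cancel the token once the process receives Control-C (SIGINT):
    let signal_token = token.clone();
    tokio::spawn(async move {
        if tokio::signal::ctrl_c().await.is_ok() {
            signal_token.cancel(); // wakes every pending `token.cancelled()`
        }
    });

    tokio::select! {
        _ = token.cancelled() => eprintln!("Received Control-C signal"),
        // Stand-in for the job orchestration future from the diff above:
        _ = tokio::time::sleep(std::time::Duration::from_secs(3600)) => {}
    }
}
```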
Note to self: I still need to properly review this part.
```rust
pub struct JobHandle {
    log_dir: Option<PathBuf>,
    endpoint: EndpointHandle,
    container_id: Option<String>,
```
This approach seems fine, but we never set `container_id` back to `None` for finished jobs - it would be more elegant if we could do so (after ensuring that the container has indeed exited, but a check like that might already be implemented to determine if the job has finished).
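A hypothetical sketch of that idea (the method name and the call site are invented for illustration):

```rust
impl JobHandle {
    // Hypothetical: call this once the job has finished and the container
    // has been confirmed to have exited, so Drop no longer tries to clean up.
    fn mark_container_exited(&mut self) {
        self.container_id = None;
    }
}
```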
```diff
@@ -202,12 +206,12 @@ impl JobHandle {
             package_name: &package.name,
             package_version: &package.version,
             log_dir: self.log_dir.as_ref(),
-            job: self.job,
+            job: self.job.clone(),
```
TODO/question: Why do we now need to clone here? (The whole cloning is irritating me a bit in general and can be quite dangerous if the data can diverge.)
Hmm, I will try something else to avoid this clone. We shouldn't really need it here.
- Add the `signal` feature to `tokio` to interrupt and handle the Control-C signal in Butido.
- Add Control-C signal handling into the `Orchestrator`.
- Implement `Drop` on the `JobHandle` to ensure container cleanup.

This is a working draft PR for testing purposes and is still missing some features.