Cleanup running containers on the Control-C signal #422
base: master
File: src/endpoint/scheduler.rs
@@ -25,7 +25,7 @@ use itertools::Itertools;
 use tokio::io::AsyncWriteExt;
 use tokio::sync::mpsc::UnboundedReceiver;
 use tokio::sync::RwLock;
-use tracing::trace;
+use tracing::{debug, error, trace};
 use uuid::Uuid;

 use crate::db::models as dbmodels;
@@ -95,6 +95,7 @@ impl EndpointScheduler {
             log_dir: self.log_dir.clone(),
             bar,
             endpoint,
+            container_id: None,
             max_endpoint_name_length: self.max_endpoint_name_length,
             job,
             staging_store: self.staging_store.clone(),
@@ -136,9 +137,11 @@ impl EndpointScheduler {
     }
 }

+#[derive(Clone)]
 pub struct JobHandle {
     log_dir: Option<PathBuf>,
     endpoint: EndpointHandle,
+    container_id: Option<String>,
     max_endpoint_name_length: usize,
     job: RunnableJob,
     bar: ProgressBar,
@@ -155,7 +158,7 @@ impl std::fmt::Debug for JobHandle {
     }
 }

 impl JobHandle {
-    pub async fn run(self) -> Result<Result<Vec<ArtifactPath>>> {
+    pub async fn run(mut self) -> Result<Result<Vec<ArtifactPath>>> {
         let (log_sender, log_receiver) = tokio::sync::mpsc::unbounded_channel::<LogItem>();
         let endpoint_uri = self.endpoint.uri().clone();
         let endpoint_name = self.endpoint.name().clone();
@@ -181,6 +184,7 @@ impl JobHandle {
             )
             .await?;
         let container_id = prepared_container.create_info().id.clone();
+        self.container_id = Some(container_id.clone());
         let running_container = prepared_container
             .start()
             .await
@@ -202,12 +206,12 @@ impl JobHandle {
             package_name: &package.name,
             package_version: &package.version,
             log_dir: self.log_dir.as_ref(),
-            job: self.job,
+            job: self.job.clone(),
Review comment: TODO/question: Why do we now need to clone here? (The whole cloning is irritating me a bit in general and can be quite dangerous if the data can diverge.)

Reply: Hmm, I will try something else to avoid this clone. We shouldn't really need it here.
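For context: the clone becomes necessary because the PR also adds an impl Drop for JobHandle further down, and fields cannot be moved out of a type that implements Drop. A minimal sketch of that compiler error (E0509) and the usual workaround, using a stand-in type rather than the PR's JobHandle:

struct Handle {
    name: String,
}

impl Drop for Handle {
    fn drop(&mut self) {
        println!("dropping handle for {}", self.name);
    }
}

fn consume(h: Handle) -> String {
    // This would fail to compile:
    // error[E0509]: cannot move out of type `Handle`, which implements the `Drop` trait
    // let name = h.name;

    // Workaround: clone the field so `h` stays intact for its Drop impl
    // (or swap the value out, e.g. with `std::mem::take`, if `h` is mutable).
    h.name.clone()
}

fn main() {
    let h = Handle {
        name: "job-1".to_string(),
    };
    println!("{}", consume(h));
}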
             log_receiver,
             bar: self.bar.clone(),
         }
         .join();
-        drop(self.bar);
+        drop(self.bar.clone());

Comment on lines -210 to +214:
Review comment: Huh, we're cloning and immediately dropping the clone? Shouldn't this be a no-op? I'm a bit surprised that Clippy doesn't catch this. (Only looking at this context, I'm also surprised that there is a drop() here at all.)

Reply: Yeah, see https://docs.rs/indicatif/0.17.8/indicatif/struct.ProgressBar.html - but I also don't like it, and I'm open to recommendations.

Review comment: This does indeed generate a compiler error. I'll have to look at it later. This error probably comes from the fact that JobHandle now implements Drop.

Reply: Ok, so this error is the result of implementing the Drop trait. I think we can simply drop this.
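For reference, indicatif's ProgressBar is reference counted internally, so cloning it is cheap and dropping a clone does not finish or clear the underlying bar. A small standalone sketch (assuming indicatif as a dependency, independent of the PR):

use indicatif::ProgressBar;

fn main() {
    let bar = ProgressBar::new(10);

    // Cloning only bumps an internal reference count; both handles drive the same bar.
    let extra_handle = bar.clone();
    drop(extra_handle); // effectively a no-op for the underlying bar

    for _ in 0..10 {
        bar.inc(1);
    }
    bar.finish_with_message("done");
}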

         let (run_container, logres) = tokio::join!(running_container, logres);
         let log =
@@ -370,6 +374,36 @@ impl JobHandle {
     }
 }

+impl Drop for JobHandle {
+    fn drop(&mut self) {
+        debug!("Cleaning up JobHandle");
Review comment: Nit: Might be nice/useful to include the job ID (but debugging is obviously optional).
+        if self.container_id.is_some() {
+            debug!("Container was already started");
+            let docker = self.endpoint.docker().clone();
+            let container_id = self.container_id.take().unwrap();
Review comment: This isn't pretty and not guaranteed to be safe - please use if let instead.

Reply: I tried that. It didn't work out because of the lifetimes.

Review comment: That most likely was due to something else then. I can give it a look after my vacation but I don't see why that should create (unsolvable) issues.

Review comment: You can use the following patch:

--- a/src/endpoint/scheduler.rs
+++ b/src/endpoint/scheduler.rs
@@ -377,10 +377,9 @@ impl JobHandle {
 impl Drop for JobHandle {
     fn drop(&mut self) {
         debug!("Cleaning up JobHandle");
-        if self.container_id.is_some() {
+        if let Some(container_id) = self.container_id.clone() {
             debug!("Container was already started");
             let docker = self.endpoint.docker().clone();
-            let container_id = self.container_id.take().unwrap();
             tokio::spawn(async move {
                 let container = docker.containers().get(&container_id);
+            tokio::spawn(async move {
+                let container = docker.containers().get(&container_id);
+                let container_info = container.inspect().await.unwrap();
Review comment: We should avoid the unwrap here - I'd probably just log the error (in theory we might occasionally run into such errors when the containers terminate between the if and this inspect). PS: We want to avoid unwrap() in general.

Reply: Fe2O3.unwrap().unwrap().unwrap().unwrap()
+                if container_info.state.running {
+                    debug!("Container is still running, cleaning up...");
Review comment: Nit: Including the container ID would be nice.
+                    match container.kill(None).await {
+                        Ok(_) => debug!("Stopped container with id: {}", container_id),
Review comment: The text doesn't match the action - there are both stop and kill, and this calls kill but logs "Stopped".

Reply: I was stopping the containers before, forgot to update the debug message. Thanks.

Review comment: In that case it would probably also be nice to document why we're killing the containers instead of stopping them. I was already wondering about that - unfortunately the Docker API documentation didn't state the differences. I assume that killing is a more reliable way but I didn't take a deeper look.
+                        Err(e) => {
+                            error!("Failed to stop container with id: {}\n{}", container_id, e)
+                        }
+                    }
+                } else {
+                    debug!("Container has already finished");
+                }
+            });
+        } else {
+            debug!("No container created");
Review comment: Sounds weird since we aren't creating containers here. I'd be more explicit, e.g.: "No container was created for the JobHandle yet -> skipping cleanup".

Reply: Sounds good. Is the approach of getting the state of the container in the drop implementation ok?

Review comment: I would set the container_id back to None for finished jobs.
+        }
+    }
+}
+
 struct LogReceiver<'a> {
     endpoint_name: &'a str,
     max_endpoint_name_length: &'a usize,
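Pulling the review suggestions above together, the Drop implementation could end up looking roughly like the following sketch (not part of the PR). It assumes the same Docker client API the diff already uses (docker.containers().get(), inspect(), state.running, kill(None)); the self.job.uuid() accessor used for logging the job ID is an assumption as well:

impl Drop for JobHandle {
    fn drop(&mut self) {
        // `self.job.uuid()` is assumed here purely to include the job ID in the log.
        debug!("Cleaning up JobHandle for job {}", self.job.uuid());

        // `if let` + `take()` instead of `is_some()`/`unwrap()`; taking the value also
        // resets `container_id` to `None` in the process.
        if let Some(container_id) = self.container_id.take() {
            let docker = self.endpoint.docker().clone();

            tokio::spawn(async move {
                let container = docker.containers().get(&container_id);

                // The container can terminate (or disappear) between the check above and
                // this inspect call, so log errors instead of unwrapping.
                match container.inspect().await {
                    Ok(info) if info.state.running => {
                        debug!("Container {} is still running, killing it", container_id);
                        // kill() rather than stop(): an interrupted build should terminate
                        // immediately instead of waiting for a graceful shutdown.
                        if let Err(e) = container.kill(None).await {
                            error!("Failed to kill container {}: {}", container_id, e);
                        } else {
                            debug!("Killed container {}", container_id);
                        }
                    }
                    Ok(_) => debug!("Container {} has already finished", container_id),
                    Err(e) => error!("Failed to inspect container {}: {}", container_id, e),
                }
            });
        } else {
            debug!("No container was created for this JobHandle yet -> skipping cleanup");
        }
    }
}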
Second file in the diff (the Orchestrator implementation):
@@ -13,6 +13,7 @@
 use std::borrow::Borrow;
 use std::collections::HashMap;
 use std::path::PathBuf;
+use std::process::ExitCode;
 use std::sync::Arc;
 use std::sync::Mutex;
@@ -32,8 +33,9 @@ use tokio::sync::mpsc::Receiver;
 use tokio::sync::mpsc::Sender;
 use tokio::sync::RwLock;
 use tokio_stream::StreamExt;
+use tokio_util::sync::CancellationToken;
 use tracing::Instrument;
-use tracing::{debug, error, trace};
+use tracing::{debug, error, info, trace};
 use typed_builder::TypedBuilder;
 use uuid::Uuid;
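The newly imported CancellationToken drives the Ctrl-C handling added to Orchestrator::run() below. As a standalone illustration of the pattern (a sketch only - the setup, timings, and messages here are illustrative and not taken from butido):

use std::time::Duration;

use tokio_util::sync::CancellationToken;

#[tokio::main]
async fn main() {
    let token = CancellationToken::new();
    let cloned_token = token.clone();

    // Cancel the token as soon as Ctrl-C is received.
    tokio::spawn(async move {
        tokio::signal::ctrl_c()
            .await
            .expect("failed to install the Ctrl-C handler");
        cloned_token.cancel();
    });

    tokio::select! {
        _ = token.cancelled() => {
            eprintln!("Received Control-C signal, aborting");
        }
        _ = async {
            // Stand-in for the real work (running the job tree).
            tokio::time::sleep(Duration::from_secs(3600)).await;
        } => {
            println!("work finished before any Ctrl-C arrived");
        }
    }
}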
@@ -265,12 +267,26 @@ impl Borrow<ArtifactPath> for ProducedArtifact {

 impl<'a> Orchestrator<'a> {
     pub async fn run(self, output: &mut Vec<ArtifactPath>) -> Result<HashMap<Uuid, Error>> {
-        let (results, errors) = self.run_tree().await?;
+        let token = CancellationToken::new();
+        let cloned_token = token.clone();
+
+        tokio::spawn(async move {
+            info!("Received the ctl-c signal, stopping...");
+            tokio::signal::ctrl_c().await.unwrap();
+            token.cancel();
+            ExitCode::from(1)
+        });
+
+        let (results, errors) = self.run_tree(cloned_token).await?;
+
         output.extend(results);
         Ok(errors)
     }

-    async fn run_tree(self) -> Result<(Vec<ArtifactPath>, HashMap<Uuid, Error>)> {
+    async fn run_tree(
+        self,
+        token: CancellationToken,
+    ) -> Result<(Vec<ArtifactPath>, HashMap<Uuid, Error>)> {
         let prepare_span = tracing::debug_span!("run tree preparation");

         // There is no async code until we drop this guard, so this is fine
@@ -452,45 +468,55 @@
         // The JobTask::run implementation handles the rest, we just have to wait for all futures
         // to succeed.
         let run_span = tracing::debug_span!("run");
-        let running_jobs = jobs
-            .into_iter()
-            .map(|prep| {
-                trace!(parent: &run_span, job_uuid = %prep.1.jobdef.job.uuid(), "Creating JobTask");
-                // the sender is set or we need to use the root sender
-                let sender = prep
-                    .3
-                    .into_inner()
-                    .unwrap_or_else(|| vec![root_sender.clone()]);
-                JobTask::new(prep.0, prep.1, sender)
-            })
-            .inspect(
-                |task| trace!(parent: &run_span, job_uuid = %task.jobdef.job.uuid(), "Running job"),
-            )
-            .map(|task| {
-                task.run()
-                    .instrument(tracing::debug_span!(parent: &run_span, "JobTask::run"))
-            })
-            .collect::<futures::stream::FuturesUnordered<_>>();
-        debug!("Built {} jobs", running_jobs.len());
-
-        running_jobs
-            .collect::<Result<()>>()
-            .instrument(run_span.clone())
-            .await?;
-        trace!(parent: &run_span, "All jobs finished");
-        drop(run_span);
-
-        match root_receiver.recv().await {
-            None => Err(anyhow!("No result received...")),
-            Some(Ok(results)) => {
-                let results = results
-                    .into_iter()
-                    .flat_map(|tpl| tpl.1.into_iter())
-                    .map(ProducedArtifact::unpack)
-                    .collect();
-                Ok((results, HashMap::with_capacity(0)))
-            }
-            Some(Err(errors)) => Ok((vec![], errors)),
-        }
+        tokio::select! {
+            _ = token.cancelled() => {
+                anyhow::bail!("Received Control-C signal");
+            }
+            r = async {
+                let running_jobs = jobs
+                    .into_iter()
+                    .map(|prep| {
+                        trace!(parent: &run_span, job_uuid = %prep.1.jobdef.job.uuid(), "Creating JobTask");
+                        // the sender is set or we need to use the root sender
+                        let sender = prep
+                            .3
+                            .into_inner()
+                            .unwrap_or_else(|| vec![root_sender.clone()]);
+                        JobTask::new(prep.0, prep.1, sender)
+                    })
+                    .inspect(
+                        |task| trace!(parent: &run_span, job_uuid = %task.jobdef.job.uuid(), "Running job"),
+                    )
+                    .map(|task| {
+                        task.run()
+                            .instrument(tracing::debug_span!(parent: &run_span, "JobTask::run"))
+                    })
+                    .collect::<futures::stream::FuturesUnordered<_>>();
+                debug!("Built {} jobs", running_jobs.len());
+
+                running_jobs
+                    .collect::<Result<()>>()
+                    .instrument(run_span.clone())
+                    .await?;
+                trace!(parent: &run_span, "All jobs finished");
+                drop(run_span);
+
+                match root_receiver.recv().await {
+                    None => Err(anyhow!("No result received...")),
+                    Some(Ok(results)) => {
+                        let results = results
+                            .into_iter()
+                            .flat_map(|tpl| tpl.1.into_iter())
+                            .map(ProducedArtifact::unpack)
+                            .collect();
+                        Ok((results, HashMap::with_capacity(0)))
+                    }
+                    Some(Err(errors)) => Ok((vec![], errors)),
+                }
+            } => {
+                r
+            }
+        }
     }
 }

Comment on lines -455 to -493:

Review comment: Note to self: I still need to properly review this part.
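Worth noting how this ties into the new Drop impl for JobHandle: when the cancellation branch of tokio::select! wins, the other branch's future - which owns the running JobTasks and, transitively, their JobHandles - is dropped mid-await, and that drop is what triggers the container cleanup. A small sketch of this drop-on-cancel behaviour with a stand-in type (not the PR's actual types):

use std::time::Duration;

use tokio_util::sync::CancellationToken;

// Stand-in for a handle that owns an external resource (e.g. a container).
struct Job(&'static str);

impl Drop for Job {
    fn drop(&mut self) {
        println!("cleaning up {}", self.0);
    }
}

#[tokio::main]
async fn main() {
    let token = CancellationToken::new();
    let trigger = token.clone();

    // Simulate a Ctrl-C arriving shortly after startup.
    tokio::spawn(async move {
        tokio::time::sleep(Duration::from_millis(50)).await;
        trigger.cancel();
    });

    tokio::select! {
        _ = token.cancelled() => {
            println!("cancelled");
        }
        _ = async {
            let _job = Job("container-1234");
            // Pretend the job runs for a long time.
            tokio::time::sleep(Duration::from_secs(3600)).await;
        } => {}
    }

    // "cleaning up container-1234" gets printed: the async block was dropped
    // mid-await, which ran Drop for the Job it owned.
}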
Review comment: This approach seems fine but we never set container_id back to None for finished jobs - it would be more elegant if we could do so (after ensuring that the container has indeed exited, but that might already be implemented to check if the job has finished).
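One way to address this, sketched against the PR's JobHandle (a hypothetical helper, not part of the diff): clear container_id once the container's exit has been observed in run(), so the Drop impl only cleans up jobs that were actually interrupted.

impl JobHandle {
    /// Hypothetical helper: call this in `run()` once the container has exited and
    /// its results have been collected, so that `Drop` skips the cleanup for jobs
    /// that finished normally.
    fn mark_container_finished(&mut self) {
        self.container_id = None;
    }
}

In run(), this could be called right after the tokio::join! on the running container has resolved.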