feat: automatically create Kubernetes Services for apps using a httpserver component #14
Conversation
Force-pushed from 1d1d77b to c60a53e
@@ -501,7 +501,7 @@ pub async fn list_apps(
     Ok(models)
 }
 
-async fn get_client(
+pub async fn get_client(
Perhaps in a future refactor this could be re-homed outside of the application resource file.
I didn't quite want to introduce a `utils` module or crate, but it was tempting.
Nothing majorly blocking, but I answered some comments and left some nits. Can come back and re-review.
@@ -835,3 +913,19 @@ pub async fn run(state: State) -> anyhow::Result<()> {
         .await;
     Ok(())
 }
 
+pub(crate) fn common_labels() -> BTreeMap<String, String> {
Nit: Consider using `Lazy` from the `once_cell` crate so this can be a `static`.
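For illustration, here is a minimal sketch of that suggestion. It uses `std::sync::OnceLock` (the standard-library counterpart of `once_cell`'s `Lazy`/`OnceCell`, stable since Rust 1.70) so no new dependency is needed; the label key and value below are assumptions for the example, not the operator's actual label set:

```rust
use std::collections::BTreeMap;
use std::sync::OnceLock;

// Cached variant of common_labels(): the map is built once on first
// call and a 'static reference is handed out afterwards.
pub(crate) fn common_labels() -> &'static BTreeMap<String, String> {
    static LABELS: OnceLock<BTreeMap<String, String>> = OnceLock::new();
    LABELS.get_or_init(|| {
        BTreeMap::from([(
            // Illustrative label only; the real set lives in the operator.
            "app.kubernetes.io/managed-by".to_string(),
            "wasmcloud-operator".to_string(),
        )])
    })
}
```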
    nats_client,
    lattice_id: lattice_id.clone(),
    consumer,
    shutdown: CancellationToken::new(),
We can probably avoid the tokio-util dep and just use a oneshot channel from normal tokio (can also abort with a `JoinHandle` if you want to keep that on the struct, but that will probably be messier here).
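As an illustration of the pattern being suggested, here is a sketch using a plain std channel in place of tokio's oneshot (the async version is structurally the same: hold the sender on the struct and select on the receiver inside the consumer loop):

```rust
use std::sync::mpsc;
use std::thread;

// Spawns a worker that runs until it is told to shut down. Dropping
// the returned Sender is enough to stop it, so no CancellationToken
// is needed.
fn spawn_worker() -> (mpsc::Sender<()>, thread::JoinHandle<&'static str>) {
    let (tx, rx) = mpsc::channel::<()>();
    let handle = thread::spawn(move || {
        // Blocks until shutdown is signalled; Err means the sender
        // side (the owning struct) was simply dropped.
        match rx.recv() {
            Ok(()) => "explicit shutdown",
            Err(_) => "sender dropped",
        }
    });
    (tx, handle)
}
```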
    tx,
};
// TODO is there a better way to handle this?
It honestly might just be easier to call this function `run` instead of `new` and have it return a new type called something like `WatcherHandle` that contains the oneshot sender to use for dropping. Then you can just pass the `rx` from the oneshot and move the `Watcher` struct into the task without returning it.
That won't work without a pretty serious refactor, since the `ServiceWatcher` relies on some methods and fields on the `Watcher` struct that would need to be duplicated across both the `Watcher` and `WatcherHandle`s. Might just be a symptom of bad design, but cloning the underlying `Watcher` seems like the least bad option at the moment.
/// It will return early if a [Watcher] already exists for the lattice.
pub async fn watch(&self, client: Client, namespace: String, lattice_id: String) -> Result<()> {
    // If we're already watching this lattice then return early
    // TODO is there an easy way to do this with a read lock?
Not really. You'd have to read lock, do the `contains`, and then release the read lock and grab a write lock to insert it. I will note that right now it is a decently long time to hold a write lock, but not enough that I would consider it blocking.
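A minimal sketch of that two-phase locking, with a plain `HashSet` standing in for the watcher map: a cheap `contains` under a read lock, then a write lock only when an insert is actually needed. Because another caller can slip in between the two locks, the `insert` return value is the authoritative answer:

```rust
use std::collections::HashSet;
use std::sync::RwLock;

// Returns true if this call started watching the lattice, false if it
// was already being watched.
fn start_watching(watched: &RwLock<HashSet<String>>, lattice_id: &str) -> bool {
    if watched.read().unwrap().contains(lattice_id) {
        return false; // already watching this lattice, return early
    }
    // `insert` returns false if someone else won the race between our
    // read lock and this write lock, so the check stays correct.
    watched.write().unwrap().insert(lattice_id.to_string())
}
```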
/// Finds the address for a target in a manifest
fn find_address(manifest: &Manifest, target: &str) -> Option<String> {
    for component in manifest.spec.components.iter() {
Lolsob. I mean... this could also just use stuff like `map`, `and_then`, and other things, but this works.
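For comparison, the combinator style being alluded to could look like this. The types below are toy stand-ins invented for the sketch (the real wadm manifest types have different fields), so only the `find`/`and_then` shape is the point:

```rust
// Toy stand-ins for the manifest types; field names are assumptions.
struct Component {
    name: String,
    address: Option<String>,
}
struct Spec {
    components: Vec<Component>,
}
struct Manifest {
    spec: Spec,
}

/// Finds the address for a target in a manifest.
fn find_address(manifest: &Manifest, target: &str) -> Option<String> {
    manifest
        .spec
        .components
        .iter()
        .find(|c| c.name == target)
        .and_then(|c| c.address.clone())
}
```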
Also add support for setting the number of stream replicas when creating the stream mirror for the operator.

Signed-off-by: Dan Norris <protochron@users.noreply.github.com>
Feature or Problem
One advantage of wasmCloud applications is that we always know what components make up an application and therefore can intelligently make decisions based on what interfaces we're using. This makes it much easier for software like the operator to take particular actions based on the contents of an application manifest.
This change adds an additional set of reconciliations we can perform on a per-lattice basis to automatically create Kubernetes Services for applications that deploy a httpserver component. The operator uses a NATS consumer that watches for manifest deploy and undeploy events and triggers a reconciliation on a managed Service. Right now this is restricted only to `daemonscalers`, since we do not have to do any bookkeeping on where components are deployed if they are running on all of the hosts in a lattice or in a subset identified by label. We will add support for `spreadscalers` in a future PR.

This allows for some interesting deployment scenarios, such as wasmCloud applications that span multiple Kubernetes deployments in the same namespace, or potentially in multiple namespaces if we decide to implement support for them. Initially the operator is only creating endpoints that route traffic from a service to pods in the same namespace, but we may add an option to relax that assumption.
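As a rough illustration, a Service created for an application whose manifest includes a httpserver component could look something like the fragment below. The name, labels, and port are assumptions for the sketch, not the operator's exact output; note there is no pod `selector`, since the operator manages the endpoints itself:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                 # derived from the application name (assumed)
  namespace: default
  labels:
    app.kubernetes.io/managed-by: wasmcloud-operator   # assumed label
spec:
  ports:
    - port: 8080               # taken from the httpserver address (assumed)
      targetPort: 8080
      protocol: TCP
  # No selector: endpoints are reconciled by the operator directly.
```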
Related Issues
Release Information
0.2
Consumer Impact
Testing
Unit Test(s)
Acceptance or Integration
Manual Verification