[
{
"uri": "https://keleustes.github.io/",
"title": "Keleustes",
"tags": [],
"description": "Main page",
"content": " Keleustes Rationale The projects listed here are POCs intended to help make educated decisions for the implementation of the OpenInfrastructure Airship project. Interesting links regarding Configuration Management and Kubernetes: The State of Kubernetes Configuration Management Declarative application management in Kubernetes None of these POCs aims to replace Airship components. They merely aim to highlight the potential advantages and pitfalls of choosing one technology versus another.\n Airship Design Airship Governance OpenDiscussion StoryBoard Airship Specs Inflight Specs Airship Related Links airship treasuremap pegleg spyglass shipyard armada deckhand drydock promenade divingbell maas Openstack Related Links openstack openstack-helm openstack-helm-infra "
},
{
"uri": "https://keleustes.github.io/post/",
"title": "Latest News",
"tags": [],
"description": "",
"content": " Posts Conversion to the new HUGO is still WIP\n"
},
{
"uri": "https://keleustes.github.io/armada-operator/children/crds/",
"title": "CRDs",
"tags": ["wiki"],
"description": "",
"content": " CRDs ArmadaChart CRD The ArmadaChart definition used in production is available here: Production\nThe CRD ArmadaChart definition is available here:\n Its Spec, which is updated through kubectl: Spec Its Status, which is updated by the operator and accessible through kubectl describe: Status Its definition, made up of the two components above: Definition The yaml version of the CRD: Yaml ArmadaChartGroup CRD The ArmadaChartGroup definition used in production is available here: Production\nThe CRD ArmadaChartGroup definition is available here:\n Its Spec, which is updated through kubectl: Spec Its Status, which is updated by the operator and accessible through kubectl describe: Status Its definition, made up of the two components above: Definition The yaml version of the CRD: Yaml ArmadaManifest CRD The ArmadaManifest definition used in production is available here: Production\nThe CRD ArmadaManifest definition is available here:\n Its Spec, which is updated through kubectl: Spec Its Status, which is updated by the operator and accessible through kubectl describe: Status Its definition, made up of the two components above: Definition The yaml version of the CRD: Yaml Usage The file passed to the armada cli is available here: Armada The file passed to kubectl apply is available here: kubectl "
},
{
"uri": "https://keleustes.github.io/oslc-operator/children/lifecycle/",
"title": "LifeCycle",
"tags": ["wiki"],
"description": "",
"content": " Openstack Service Lifecycle Schema Rationale Some transitions from one phase/stage to the other are autonomous (for instance Start to Test if the install is successful): xxx Some transitions from one phase/stage to the other must be triggered by Ops. For instance, the traffic will not be drained from a site unless Ops needs to perform operations: xxx Some of the lifecycle can be applied to one slice/shard of a service. This is what happens during a blue-green update. xxx xxx SW & Data Handling Schema Rationale In this Kubernetes environment no difference is made between configuration data/configmaps and the actual software. It is considered read-only. Reverting to a previous version means reverting the docker image and the config map.\n Data is stateful data, for instance the content of a database. Part of an update may involve changing a database schema and migrating data. Hence it is important to provide a backup mechanism in order to be able to perform a rollback.\n Upgrading may also involve changing the number of pods involved in a database cluster. Retriggering the data sharding/replication may require creating dedicated CRDs (for instance etcd…)\n "
},
{
"uri": "https://keleustes.github.io/kustomize/",
"title": "Kustomize",
"tags": [],
"description": "This is the keleustes kustomize POC page",
"content": "Kustomize Rationale Investigate the feasibility of converting deckhand functions to kustomize. For that purpose improvements have been made and proposed to the kustomize community. CRDs have been created and added to the underlying Kubernetes Cluster. Note that using CRDs does not force the Airship architecture to migrate to an operator architecture immediately. Some effort has been spent on getting the OpenAPIV3Schema in the deploy/crds/xxx.yaml to be as accurate as possible. This helps kubectl perform syntax verification. Still, the real syntax check will be performed by helmv3, since it verifies the syntax of the values obtained from layering the “override” values on top of the “default” values provided in the chart. This is not a “replacement” for airship deckhand. This POC merely aims to highlight the potential advantages and pitfalls in going in that direction.\n Lessons Learned Layering It is possible to support the current airship layering (global, type, site). The three subfolders have been created. Each kustomization.yaml uses a “base” entry. Check airsloop Substitutions kustomize supports variables by default\n The regex uses the $(xxx) format. Improvements have been proposed to address what kustomize could not do by default: PR simple variable:\n Definition in the kustomization.yaml: global Tree and structure can be inlined: inline Variable value extracted from a catalog CRD: value simple inlining:\n Definition in the kustomization.yaml: global Tree and structure can be inlined: inline Variable value extracted from a catalog CRD: value parent inlining (for multipass replacement) is currently used the following way:\n Definition in the kustomization.yaml: global Tree and structure can be inlined: inline Variable value extracted from a catalog CRD: value Remaining issues Extract from “The Bad: No parameters & templates”: The same property that makes kustomize applications so readable, can also make it very limiting. For example, I was recently trying to get the kustomize CLI to set an image tag for a custom resource instead of a Deployment, but was unable to. Kustomize does have a concept of “vars,” which look a lot like parameters, but somehow aren’t, and can only be used in Kustomize’s sanctioned whitelist of field paths. I feel like this is one of those times when the solution, despite making the hard things easy, ends up making the easy things hard.\n Documentation ReadTheDocs Readme Associated GIT Repos kustomize airship-deckhand Build kustomize Build git clone -b allinone https://github.com/keleustes/kustomize.git GO111MODULE=on go build -o $HOME/bin/kustomize cmd/kustomize/main.go Branches master is aligned with upstream/master inline contains the enhanced inlining, for instance tree inlining and parent-inline autov contains the automatic variable declaration diamond contains the ability to have diamond imports of the base folder. Important for the tree reorganization slicecase contains a small bug fix. allinone is built out of all the above branches. \n"
},
{
"uri": "https://keleustes.github.io/armada-operator/children/operator/",
"title": "Deployment Flows",
"tags": ["wiki"],
"description": "",
"content": " Deployment Flows CRDs in Deckhand In the Airship used for production deployment, the SiteManifest could be viewed as a composition of Custom Resources:\n “Drydock” CR describing the kubernetes nodes, network, kubelet. “ArmadaChart” CR describing the location of the helm chart and the helm chart value override. Shipyard, Drydock and Armada are orchestrating the order of deployment:\n BareMetal OS, Network Docker, Kubelet, Kubernetes Storage, CEPH Applications for instance MariaDB In the subsequent drawings, red arrows represent API and CLI calls (airship or kubernetes calls), blue arrows represent events generated by K8s.\nCRDs in ETCD This view describes how ETCD could potentially fulfill the role of Deckhand assuming that:\n “History” of the CRDs is ensured. A rollback to a previous version should be possible. “Sequencing” of the deployment is ensured. Helm Chart for CEPH needs to run before Helm Chart for MariaDB CRD Process Flows Deployment Flow The main principle is:\n Create all the charts in ETCD and “disable” the reconcile feature Changing the admin_state to enable triggers the Reconciliation/Installation of the HelmChart Either a ChartGroup or an Argo workflow enables the ArmadaCharts in the proper order. Upgrade/Rollback Flow Changing the targetVersion in the ArmadaChart triggers the Upgrade/Rollback of the HelmChart Either a ChartGroup or an Argo workflow updates the targetVersion attribute of the ArmadaCharts in the proper order. The “targetVersion” attribute usage will potentially completely remove the need for the enabled/disabled state. Moving the “targetVersion” from 0 to 1 would indeed have the same effect as changing the admin state from disabled to enabled. Backup Flow Create an ArmadaBackupLocation CRD Pointing the backupLocation in the ArmadaChart to the previous CRD triggers the backup of the stateful data associated with the HelmChart Either a ChartGroup or an Argo workflow updates the backupLocation attribute of the ArmadaCharts in the proper order. Restore Flow Create an ArmadaBackupLocation CRD Pointing the restoreLocation in the ArmadaChart to the previous CRD triggers the restore of the stateful data associated with the HelmChart Either a ChartGroup or an Argo workflow updates the restoreLocation attribute of the ArmadaCharts in the proper order. "
},
{
"uri": "https://keleustes.github.io/armada-operator/",
"title": "Armada",
"tags": [],
"description": "This is the keleustes armada-operator POC page",
"content": "Armada Rationale Investigate the feasibility of converting airship-armada functions into a Kubernetes operator pattern. This is not a “replacement” for airship armada. This POC merely aims to highlight the potential advantages and pitfalls in going in that direction.\n Lessons Learned Similar problems discussed by the community The following links are trying to address similar problems\n Mirantis AppController Facilitate API orchestration CRD usages Schema validation: openAPIV3 schema validation is supported by kubectl. The schema can be generated out of the go code: generated schema generation mechanism go code DeepCopy and structured types in golang: During the CRD definition, the system, especially during deep-copy generation, does not really support generic yaml objects such as interface{}. The Values of the ArmadaChart that would be left untyped (interface{}) when using go had to be modified to use a well defined struct: ArmadaChartValue. This increases the complexity very rapidly. We could not get a []byte construct to be used instead. Moreover, any struct not being well defined, as in AVConf, gives the system the ability to be more permissive, but the corresponding data is not accessible in the struct returned to the reconcile method. Fundamentally, all the different constructs for the values.yaml of all the charts developed for airship and openstack-helm are being funneled through this construct. Even though the teams did an outstanding job of consolidating those charts, outliers very quickly cause issues and prevent a kubectl apply -f xxx.yaml from working. In the particular case of the ArmadaChart, the feature overlaps with the work done by the helmv3 team: schema validation Operator Usage Ownership of objects: The armada-operator code largely comes from the operator-fwk helm implementation, especially the watch and the way the owner is added to the top k8s objects deployed by tiller: owner The armada-operator can now decide what to do when an object owned by an ArmadaChart is deleted, which helps with consistency. What to do if a user ran a ‘kubectl edit’ command behind armada’s back, or if a stateful set created by an armada chart is being deleted? Should the object be recreated, or the status of the ArmadaChart set to “inconsistent”? Kubernetes Event Handling: One of the main airship-armada features is to be able to detect when the resources deployed by a helm chart are available, time out, and potentially delete the release if something went wrong: airship-armada Since the armada-operator is listening on all the objects owned by an ArmadaChart, it was possible to emulate the feature: An event detecting a change of state triggers a reconcile: event When the resources deployed by the helm release become available, the ArmadaChart status goes from “running” to “ready/deployed”: ready RBAC Implication: Kubernetes RBAC and service accounts are used. Because the armada-operator often ends up running helm charts which create new service accounts, the rights provided to the armada-operator look rather extensive: roles Deployment of the operator itself: The operator is deployed in kubernetes itself. If using helm, we have some kind of chicken-and-egg issue; otherwise the operator can be deployed using plain kubectl. HelmV2 vs HelmV3 Tiller dependency: As for the event handling, the bulk of the behavior was provided by the operator-fwk: helm and then adapted in the armada-operator: armada-operator Tiller uses in-memory storage. The Tiller ReleaseServer construct is also local to the operator: local Once the helmv3 client library is released, it will be possible to leverage it instead of a local tiller release manager as for helm v2. Helm Golang code dependency: Using the go client code instead of the helm executable gives us the power of go code but also brings dependency issues on the helm code structure. We had to account for the improvements and refactoring done by the helm team: v2 tag chart struct in helmv2 v3 tag chart struct in helmv3 Multithreading and Concurrency One Armada Operator per namespace: TBD Underlying resource state management One of the key features of armada is to be able to figure out whether the resources deployed through helm install or helm upgrade are ready to be used.\nDocumentation ReadTheDocs Readme Associated GIT Repos armada-operator airship-armada Build armada-operator Build git clone -b kube15 https://github.com/keleustes/armada-operator.git make Branches master supports kubernetes 1.14.x and helm 2.13.x kube15 supports kubernetes 1.15.x and helm 2.14.x helmv3 supports kubernetes 1.15.x and helm 3.0.x beta1 Important Issues and Release Notes kubernetes CRD schema validation helm Rendering Engine. It seems that the rendering engine cannot be modified in helmV3. That ability of helmv2 is used to render the objects and add an owner. \n"
},
{
"uri": "https://keleustes.github.io/oslc-operator/children/flowcharts/",
"title": "FlowCharts",
"tags": ["wiki"],
"description": "",
"content": " Greenfield Deployment Schema Rationale The Ops team needs to deploy a new service. If the service is unhealthy, it gets removed. If the service is healthy, it reaches the operational state. Brownfield Change Schema Rationale The Ops team needs to:\n Use Case 1: remove a service. Use Case 2: update a service. Use Case 3: rollback a service. Once the traffic is drained:\n Use Case 1: the service is removed. Use Case 2: the service is updated. Use Case 3: the service is rolled back. Once the update/rollback is performed, the traffic is rolled out.\n Notes In order to perform a rollback or an update, the tools need to have access to credentials. This means that the rollback and update pods need to be deployed with the rest of the helmchart in order to have config and secrets recreated and to access the environment variables. We need to be able to selectively render the files matching the phase. Draining the traffic from a service, especially if we need to do a blue/green upgrade, means changing the nginx setup. How to do a rolling upgrade when data is actually persisted in a database? Changing the database schema would mean:\n Stopping the MariaDB replication on one of the nodes. Ensuring that the “upgrade schema” script runs against that one pod. Getting the pod running the new software to access that new pod. Ref Doc: https://www.weave.works/blog/how-to-correctly-handle-db-schemas-during-kubernetes-rollouts Testing the functionality of the service with the new schema Rolling out the rest of the traffic. The above flow may be too complicated.\n Preventing access to the other services could be enough? Requirement on backward compatibility of the schema. How does the Openstack service itself do a rolling upgrade? Is comple LCM Phase Breakdown The following list attempts to provide a fine-grained view of the K8s resources required during the LCM phases.\nInstall phase The following jobs need to be performed:\n svc/templates/job-db-init.yaml Access to the following resources is needed:\n svc/templates/secret-db.yaml svc/templates/secret-ingress-tls.yaml svc/templates/secret-keystone.yaml svc/templates/configmap-bin.yaml svc/templates/configmap-etc.yaml Upgrade phase The following jobs need to be performed:\n svc/templates/job-db-backup.yaml (TODO) Access to the following resources is needed:\n svc/templates/secret-db.yaml svc/templates/secret-ingress-tls.yaml svc/templates/secret-keystone.yaml svc/templates/configmap-bin.yaml svc/templates/configmap-etc.yaml Rollback phase The following jobs need to be performed:\n svc/templates/job-db-restore.yaml (TODO) Access to the following resources is needed:\n svc/templates/secret-db.yaml svc/templates/secret-ingress-tls.yaml svc/templates/secret-keystone.yaml svc/templates/configmap-bin.yaml svc/templates/configmap-etc.yaml TrafficRollout phase The following jobs need to be performed:\n tbd Access to the following resources is needed:\n svc/templates/secret-db.yaml svc/templates/secret-ingress-tls.yaml svc/templates/secret-keystone.yaml svc/templates/configmap-bin.yaml svc/templates/configmap-etc.yaml TrafficDrain phase The following jobs need to be performed:\n tbd Access to the following resources is needed:\n svc/templates/secret-db.yaml svc/templates/secret-ingress-tls.yaml svc/templates/secret-keystone.yaml svc/templates/configmap-bin.yaml svc/templates/configmap-etc.yaml Uninstall phase The following jobs need to be performed:\n svc/templates/job-db-drop.yaml Access to the following resources is needed:\n svc/templates/secret-db.yaml svc/templates/secret-ingress-tls.yaml svc/templates/secret-keystone.yaml svc/templates/configmap-bin.yaml svc/templates/configmap-etc.yaml Test phase The following jobs need to be performed:\n tbd Access to the following resources is needed:\n svc/templates/secret-db.yaml svc/templates/secret-ingress-tls.yaml svc/templates/secret-keystone.yaml svc/templates/configmap-bin.yaml svc/templates/configmap-etc.yaml To sort svc/templates/bin/_bootstrap.sh.tpl svc/templates/bin/_db-sync.sh.tpl svc/templates/job-bootstrap.yaml svc/templates/job-db-sync.yaml svc/templates/job-image-repo-sync.yaml svc/templates/job-ks-endpoints.yaml svc/templates/job-ks-service.yaml svc/templates/job-ks-user.yaml svc/templates/job-rabbit-init.yaml svc/templates/network_policy.yaml svc/templates/pod-rally-test.yaml "
},
{
"uri": "https://keleustes.github.io/airship-treasuremap/",
"title": "TreasureMap",
"tags": [],
"description": "This is the keleustes treasuremap POC page",
"content": "TreasureMap Rationale Investigate the feasibility of converting airship-treasuremap site descriptions into kustomize site descriptions (Kubernetes CRD based). The kustomize layering is being leveraged (global, type, site). This is not a “replacement” for airship treasuremap. This POC merely aims at highlighting what would have to be done to adapt treasuremap to tools such as kustomize, argo…\n Lessons Learned Data Layering, Substitutions and Validation It is possible to support the current airship layering (global, type, site). The three subfolders have been created. Each kustomization.yaml uses a “base” entry. Check airsloop\n kustomize supports variables by default\n The regex uses the $(xxx) format. Improvements have been proposed to address what kustomize could not do by default: PR simple variable:\n Definition in the kustomization.yaml: global Tree and structure can be inlined: inline Variable value extracted from a catalog CRD: value simple inlining:\n Definition in the kustomization.yaml: global Tree and structure can be inlined: inline Variable value extracted from a catalog CRD: value parent inlining (for multipass replacement) is currently used the following way:\n Definition in the kustomization.yaml: global Tree and structure can be inlined: inline Variable value extracted from a catalog CRD: value Some of the OpenAPI V3 constructs are not yet supported by CRDs.\n AdditionalProperties are allowed in specific conditions, check additionalProperties Definitions and Refs are not supported: definitions AnyOf construct anyOf Operator experience kubectl get act --all-namespaces provides the user with an at-a-glance view of the deployment. TBD Documentation ReadTheDocs Readme Associated GIT Repos airship-treasuremap airship-treasuremap Test airsloop site rendering Build Check that your configuration is correct. Invokes kustomize build and compares the output with a previously generated output. Be sure to have built the allinone version of kustomize.\nmake rendering-test-airsloop Deploy the CRDs without operators make deploy-airsloop kubectl get all act --all-namespaces Branches master contains site manifests which require the inline function, but the autovar feature is not used. autovar has the autovar feature of kustomize enabled, hence no need for 3000 lines of vars and var references: Important Issues and Release Notes kubernetes The definition of the CRD will soon be using v1 instead of v1beta1. The schema will be mandatory and the handling of unknown fields will change.\n CRD schema validation Pruning Unknown Field \n"
},
{
"uri": "https://keleustes.github.io/armada-operator/children/todo/",
"title": "Ideas",
"tags": ["wiki"],
"description": "",
"content": " Ideas Can we use the “finalizer” to implement the “protected” feature of ArmadaChart? We need a consistent handling of Conditions, Events and Status, and a behavior which is easily understood by people who understand K8s. “kubectl get act” should be able to return a synthetic view as good as the one “kubectl get pod” provides. The Status object should be accurate enough for the DAG in the Argo Workflow to stay simple. Questions Should the deletion of an ArmadaChartGroup trigger deletion of the ArmadaCharts? How to deal with the “prefix” feature of the “ArmadaManifest”? Do we still need the ArmadaManifest? Should we add a “workflows” field in the ArmadaManifest? Armada would not be using keystone anymore but Kubernetes RBAC. What are the impacts? History of the ArmadaChart can be implemented two ways: a. Reuse the K8s ControllerRevision code. b. Reuse the Helm storage Driver.\n"
},
{
"uri": "https://keleustes.github.io/oslc-operator/",
"title": "LifeCycle Manager",
"tags": [],
"description": "This is the keleustes OpenstackService LifeCycle operator POC",
"content": "OpenstackLCM Rationale Create a Kubernetes Operator able to orchestrate draining traffic, upgrading, testing, traffic rollout… The main lesson learned from this POC is the ability to use the helm rendering to provide a unified way of delivering scripts onto a platform and, at runtime, selectively decide which part of the chart to render according to the state of the service, whose lifecycle may not exactly match the lifecycle of a helm chart (i.e. install, upgrade, rollback, delete). The Argo team has since published a new argoproj subproject called argo-rollout, which seems really promising. It currently relies on replacing the standard StatefulSet with an Argo Rollout deployment object\n Lessons Learned Multi facet charts TBD Traffic Draining TBD ScaleUp / ScaleDown TBD Documentation Readme Argo CD Argo Rollout WeaveWorks Flagger Associated GIT Repos oslc-operator Build oslc-operator Build git clone -b kube15 https://github.com/keleustes/oslc-operator.git make Branches master supports kubernetes 1.14.x and helm 2.13.x kube15 supports kubernetes 1.15.x and helm 2.14.x \n LifeCycle Openstack Service Lifecycle Schema Rationale Some transitions from one phase/stage to the other are autonomous (for instance Start to Test if the install is successful): xxx Some transitions from one phase/stage to the other must be triggered by Ops. For instance, the traffic will not be drained from a site unless Ops needs to perform operations: xxx Some of the lifecycle can be applied to one slice/shard of a service.\n FlowCharts Greenfield Deployment Schema Rationale The Ops team needs to deploy a new service. If the service is unhealthy, it gets removed. If the service is healthy, it reaches the operational state. Brownfield Change Schema Rationale The Ops team needs to: Use Case 1: remove a service. Use Case 2: update a service. Use Case 3: rollback a service. Once the traffic is drained: Use Case 1: the service is removed.\n Oslc CRD LifeCycle Modeling Design Oslc CRD The CRD Oslc definition is available here: Its Spec, which is updated through kubectl: Spec Its Status, which is updated by the operator and accessible through kubectl describe: Status Its definition, made up of the two components above: Definition The yaml version of the CRD: Yaml Oslc Controller TBD SubResources The Phase CRDs are currently standalone CRDs. This gives the phase-controller control over those objects.\n Phase CRD Phase Modeling Design Phase CRD The CRD Phase definition is available here: Its Spec, which is updated through kubectl: Spec Its Status, which is updated by the operator and accessible through kubectl describe: Status Its definition, made up of the two components above: Definition The yaml version of the CRD: Yaml Phase Controller The current POC created one CRD per phase. Most of the attributes of those CRDs are common.\n"
},
{
"uri": "https://keleustes.github.io/oslc-operator/children/oslc_crd/",
"title": "Oslc CRD",
"tags": ["wiki"],
"description": "",
"content": " LifeCycle Modeling Design Oslc CRD The CRD Oslc definition is available here:\n Its Spec, which is updated through kubectl: Spec Its Status, which is updated by the operator and accessible through kubectl describe: Status Its definition, made up of the two components above: Definition The yaml version of the CRD: Yaml Oslc Controller TBD\nSubResources The Phase CRDs are currently standalone CRDs. This gives the phase-controller control over those objects. At one point we will have to weigh whether we need to keep those CRDs or simply consider the Phases as nodes of an Argo Workflow.\n"
},
{
"uri": "https://keleustes.github.io/cluster-api/",
"title": "Cluster-API",
"tags": [],
"description": "This is the keleustes cluster-api POC page",
"content": "Cluster API Rationale This POC of a baremetal cluster-api provider was started before the official cluster-api-provider-baremetal. This POC was relying on Airship Drydock/Maas. Thanks to the work done by the metal3.io team, most of the content of this POC has become irrelevant. The remaining questions that can be partly answered by this POC are: Is DivingBell still relevant? Is the cluster-api in charge of updating and rebooting machines when the machine specs are updated? How much of Promenade is still relevant? cluster-api indeed helps save the kubeadm token into configmaps to help machines join. Can kustomize be used to build the cluster-api MachineList and Cluster CRDs from Airship Site definitions? This is not a “replacement” for airship drydock. This POC merely aims to highlight the potential advantages and pitfalls in going in that direction.\n Lessons Learned Baremetal machine greenfield deployment TBD Baremetal machine brownfield upgrade and update Docker, Kubelet: TBD OS: TBD Kubernetes state maintenance kubeproxy, api-server,…: TBD Documentation TBD Associated GIT Repos cluster-api cluster-api-provider-baremetal cluster-api-provider-airship \n"
},
{
"uri": "https://keleustes.github.io/kubeadm/",
"title": "Kubeadm",
"tags": [],
"description": "This is the kubeadm POC page",
"content": "Kubeadm Rationale Investigate the feasibility of converting promenade functions to kubeadm. For that purpose improvements have been made and proposed to the kubeadm community. This is not a “replacement” for airship promenade. This POC merely aims to highlight the potential advantages and pitfalls in going in that direction.\n Lessons Learned Kubernetes High Availability TBD Certificates Management Promenade is currently in charge of deploying certificates. The teams implementing the cluster-api have different approaches:\n Azure Cluster API implementation Kubernetes Software Upgrade Promenade is installing a set of packages:\n Azure Package Installation kubelet, kubeadm and kubectl upgrades AWS Cluster API Documentation ReadTheDocs Readme Associated GIT Repos kubeadm airship-promenade \n"
},
{
"uri": "https://keleustes.github.io/argo/",
"title": "Argo",
"tags": [],
"description": "This is the keleustes argo POC page",
"content": "Argo Rationale Investigate the feasibility of converting shipyard functions to argo. For that purpose improvements have been made and proposed to the argo community. This is not a “replacement” for airship shipyard. This POC merely aims to highlight the potential advantages and pitfalls in going in that direction.\n Lessons Learned Workflow TBD Documentation ReadTheDocs Readme Associated GIT Repos argo airship-shipyard \n"
},
{
"uri": "https://keleustes.github.io/oslc-operator/children/phase_crd/",
"title": "Phase CRD",
"tags": ["wiki"],
"description": "",
"content": " Phase Modeling Design Phase CRD The CRD Phase definition is available here:\n Its Spec, which is updated through kubectl: Spec Its Status, which is updated by the operator and accessible through kubectl describe: Status Its definition, made up of the two components above: Definition The yaml version of the CRD: Yaml Phase Controller The current POC created one CRD per phase. Most of the attributes of those CRDs are common. At one point we will have to weigh the pros and cons of having only one PhaseCRD or separate CRDs such as TestPhaseCRD and TrafficRolloutCRD.\nSubResources The PhaseCRD is built using the following principles:\n The Phase CRDs are currently standalone CRDs. This gives the phase-controller control over those objects. At one point we will have to weigh whether we need to keep those CRDs or simply consider the Phases as nodes of an Argo Workflow.\n The PhaseCRD currently loads a yaml file. This could be a helm chart. The PhaseCRD then owns the subresources described in the yaml file:\na. an argo Workflow b. another CRD such as a Helm3Release or an EtcdBackup c. a simple kubernetes job or pod (utility container, script…)\n The key aspect here is to be able to monitor the end of those tasks as well as their success/failure.\n The HelmV2 renderer is supported. HelmV3 POC support has been added\n "
},
{
"uri": "https://keleustes.github.io/about/",
"title": "About KELEUSTES",
"tags": [],
"description": "",
"content": " About WIP\n"
},
{
"uri": "https://keleustes.github.io/categories/",
"title": "Categories",
"tags": [],
"description": "",
"content": ""
},
{
"uri": "https://keleustes.github.io/tags/",
"title": "Tags",
"tags": [],
"description": "",
"content": ""
},
{
"uri": "https://keleustes.github.io/tags/wiki/",
"title": "Wiki",
"tags": [],
"description": "",
"content": ""
},
{
"uri": "https://keleustes.github.io/categories/wiki/",
"title": "Wiki",
"tags": [],
"description": "",
"content": ""
},
{
"uri": "https://keleustes.github.io/categories/wip/",
"title": "Wip",
"tags": [],
"description": "",
"content": ""
}]