Cleanup ota reference manual #635

Merged: 2 commits, Dec 22, 2023

71 changes: 35 additions & 36 deletions source/reference-manual/ota/advanced-tagging.rst
@@ -3,26 +3,26 @@
Advanced Tagging
================

-Some users incorporate non-trivial workflows that can require advanced tagging
-techniques. These workflows can be handled in the :ref:`ref-factory-definition`.
+You may sometimes need to incorporate a non-trivial workflow requiring advanced tagging techniques.
+These workflows are handled in the :ref:`ref-factory-definition`.

Terminology
-----------

-**Platform Build** - A build created by a change to the LmP (lmp-manifest.git
-or meta-subscriber-overrides.git). This is the base OS image.
+**Platform Build**: A build created by a change in ``lmp-manifest.git`` or ``meta-subscriber-overrides.git``.
+This is the base OS image.

-**Container Build** - A build created by a change to containers.git.
+**Container Build**: A build created by a change to ``containers.git``.

-**Target** - This an entry in a factory's TUF targets.json file. It represents
-what should be thought of as an immutable combination of the Platform build's
-OSTree hash + the output of a Container build.
+**Target**: An entry in a Factory's TUF ``targets.json``.
+It represents an immutable combination of the Platform build's OSTree hash with the output of a container build.

-**Tag** - A Target has a "custom" section where with a list of Tags. The
-tags can be used to say things like "this is a development build"
-or this is a "production" build.
+**Tag**: A user defined attribute of a Target designating its intended usage.
+Tags are defined in the "custom" section of a Target.
+They can be used to e.g. distinguish between "development" versus "production" builds.
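
For orientation, a hedged sketch of what such an entry can look like in ``targets.json`` — the Target name and the fields beyond ``custom.tags`` (``version``, ``hardwareIds``) are illustrative and may differ in your Factory::

  "intel-corei7-64-lmp-42": {
    "hashes": { "sha256": "OSTREEHASH..." },
    "custom": {
      "version": "42",
      "tags": ["main"],
      "hardwareIds": ["intel-corei7-64"]
    }
  }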

-Scenario 1: A new platform build that re-uses containers
+Scenario 1: A New Platform Build That Re-Uses Containers
--------------------------------------------------------

A Factory is set up with the normal ``main`` branch::
@@ -36,8 +36,8 @@ A Factory is set up with the normal ``main`` branch::
      refs/heads/main:
        - tag: main

-You'd like to introduce a new ``stable`` branch from the LmP but have it use
-the latest containers from master. This can be done with::
+You want to introduce a new ``stable`` branch from the LmP, but have it use the latest containers from ``main``.
+This can be done with::

  lmp:
    tagging:
@@ -52,7 +52,7 @@ the latest containers from master. This could be done with::
        - tag: main
        - tag: stable
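
Pieced together (the diff view collapses part of this block), the full configuration would look something like the following — a reconstruction to verify against your own ``factory-config.yml``::

  lmp:
    tagging:
      refs/heads/main:
        - tag: main
      refs/heads/stable:
        - tag: stable

  containers:
    tagging:
      refs/heads/main:
        - tag: main
        - tag: stable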

-Consider this pseudo targets example::
+Consider this pseudo Targets example::

  targets:
    build-1:
@@ -64,18 +64,18 @@ Consider this pseudo targets example::
      compose-apps: foo:v2, bar:v2
      tags: main

-If a change to the stable branch was pushed to the LmP, a new
-target, build-3, would be added. The build logic would then look through
-the targets list to find the most recent ``main`` target so that
-it can copy those compose-apps. This would result in a new target::
+If a change to the stable branch was pushed to the LmP, a new Target, ``build-3``, would be added.
+The build logic would then look through the Targets list to find the most recent ``main`` Target.
+It can then copy the compose-apps from that most recent Target.
+This would result in a new Target::

  build-3:
    ostree-hash: NEWHASH
    compose-apps: foo:v2, bar:v2
    tags: stable

On the other hand, there might also be a new container build for ``main``.
-In this case the build logic will produce two new targets::
+In this case, the build logic will produce two new Targets::

  build-4: # for stable it will be based on build-3
    ostree-hash: NEWHASH
@@ -87,13 +87,13 @@ In this case the build logic will produce two new targets::
    compose-apps: foo:v3, bar:v3
    tags: main

-Scenario 2: Multiple container builds using the same platform
+Scenario 2: Multiple Container Builds Using the Same Platform
-------------------------------------------------------------

-This scenario is the reverse of the previous one. A factory might have a
-platform build tagged with ``main``. However, there are two versions of
-containers being worked on: ``main`` and ``foo``. This could be handled
-with::
+This scenario is the reverse of the first one.
+A Factory might have a platform build tagged with ``main``.
+However, there are two versions of containers being worked on: ``main`` and ``foo``.
+This could be handled with::

  lmp:
    tagging:
@@ -108,13 +108,13 @@ with::
        - tag: foo
          inherit: main
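
As a hedged illustration (build numbers, app names, and hashes are invented), a container change on each branch would then produce Targets along these lines::

  build-10:                    # containers.git, branch main
    ostree-hash: PLATFORMHASH  # from the latest main platform build
    compose-apps: app:v5
    tags: main
  build-11:                    # containers.git, branch foo
    ostree-hash: PLATFORMHASH  # same platform, per "inherit: main"
    compose-apps: app:foo-v5
    tags: foo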

-Scenario 3: Multiple teams, different cadences
+Scenario 3: Multiple Teams, Different Cadences
----------------------------------------------

-Some organizations may have separate core platform and application teams. In
-this scenario, it may be desirable to let each team move at their own decoupled
-paces. Furthermore, the application team might have stages(branches) of
-development they are working on. This could be handled with something like::
+Your organization may have separate core platform and application teams.
+In this scenario, it may be desirable to let each team move at their own pace.
+Furthermore, the application team might have stages (branches) of development they are working on.
+This can be handled with something like::

  lmp:
    tagging:
@@ -131,12 +131,11 @@ development they are working on. This could be handled with something like::
        - tag: qa
          inherit: main

-This scenario is going to produce ``main`` tagged builds that have no
-containers, but can be generically verified. Then each containers.git branch
-will build Targets and grab the latest ``main`` tag to base its platform
-on. **NOTE:** Changes to ``main`` don't cause new container builds. In
-order to get a container's branch updated to the latest ``main`` a user
-would need to push an empty commit to containers.git to trigger a new build::
+This will produce ``main`` tagged builds that have no containers, but can be generically verified.
+Then, each ``containers.git`` branch will build Targets and grab the latest ``main`` tag to base its platform on.
+
+It is important to note that changes to ``main`` do not cause new container builds.
+In order to get a container's branch updated to the latest ``main``, push an empty commit to ``containers.git`` to trigger a new build::

    # from branch qa
    git commit --allow-empty -m 'Pull in latest platform changes from main'
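
For completeness, the commit still has to be pushed to trigger CI — a hedged follow-up, assuming the default remote name ``origin``::

    git push origin qa
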
38 changes: 24 additions & 14 deletions source/reference-manual/ota/aktualizr-lite.rst
@@ -1,9 +1,11 @@
.. _ref-aktualizr-lite:

-aktualizr-lite
+Aktualizr-Lite
==============

-The default OTA client shipped with the Linux microPlatform is ``aktualizr-lite``. This client is a build variant of the Aktualizr project. It is targeting users who wish to have the security aspects of TUF but do not want the complexity of Uptane.
+The default OTA client shipped with the Linux® microPlatform is ``aktualizr-lite``.
+This client is a build variant of the Aktualizr project.
+It is for those who wish to have the security aspects of TUF, but without the complexity of Uptane.

.. figure:: /_static/diagrams/aktualizr-lite/aktualizr-lite.png
:align: center
@@ -14,7 +16,9 @@ There are two modes ``aktualizr-lite`` supports.
Daemon Mode (Default)
---------------------

-This is the default mode of ``aktualizr-lite`` in the Linux microPlatform. It is a systemd service, which is enabled by default on Community Factory images. Additionally, the daemon will only be enabled in a Personal or Enterprise factory after ``lmp-device-register`` has sucessfully registered your device. The daemon will periodically check for new updates, and apply them when found.
+This is the default mode of ``aktualizr-lite`` in the Linux microPlatform.
+The daemon will only be enabled in a Factory after ``lmp-device-register`` has successfully registered your device.
+The daemon periodically checks for new updates, and applies them when found.
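
To check whether the daemon is running — a minimal sketch, assuming the standard LmP unit name ``aktualizr-lite.service``::

    sudo systemctl status aktualizr-lite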

To disable daemon mode:

@@ -49,49 +53,55 @@ Disabling daemon mode is not recommended nor supported, but running ``aktualizr-
View Current Status
~~~~~~~~~~~~~~~~~~~

-You can run ``sudo aktualizr-lite status`` to view the current status of the device.
+To view the current status of the device::
+
+    sudo aktualizr-lite status

Fetch and List Updates
~~~~~~~~~~~~~~~~~~~~~~

-This command will refresh the targets metadata from the OTA server, and present you with a list of available targets which can be applied.
+This will refresh the Targets metadata from the OTA server, and present you with a list of available Targets::

-``sudo aktualizr-lite list``
+    sudo aktualizr-lite list

Apply Latest Update
~~~~~~~~~~~~~~~~~~~

-This command will apply the latest available update to the device. This includes both OSTree and Docker app targets.
+This will apply the latest available update to the device.
+This includes both OSTree and Docker app Targets::

-``sudo aktualizr-lite update``
+    sudo aktualizr-lite update

Apply Specific Update
~~~~~~~~~~~~~~~~~~~~~

-If you would like to update to a specific build number, you can use the following command.
+To update to a specific build number::

-``sudo aktualizr-lite update --update-name <build_number>``
+    sudo aktualizr-lite update --update-name <build_number>

.. note::

-   This can only be performed when the original and update targets are under the same tag. In case the update is tagged differently, it is required to switch tags before running this command.
+   This can only be performed when the original and update Targets are under the same tag.
+   In case the update is tagged differently, it is required to switch tags before running this command.
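
One hedged way to switch a device's tag from a host — verify the exact flags against ``fioctl devices config updates --help``::

    fioctl devices config updates <device> --tags <new_tag>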

Configuration
-------------

-Configuration update methods
+Configuration Update Methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* Editing ``/var/sota/sota.toml`` on a device
-* Adding or editing an existing configuration snippet e.g. ``/etc/sota/conf.d/z-50-fioctl-01.toml`` on a device
-* Running *fioctl* from any host ``fioctl devices config <device>``, see :ref:`ref-configuring-devices` for more details
+* Adding or editing an existing configuration snippet, e.g. ``/etc/sota/conf.d/z-50-fioctl-01.toml`` on a device
+* Running ``fioctl devices config <device>`` from a host.
+  See :ref:`ref-configuring-devices` for more details.
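
For example, a hedged sketch of such a snippet — the ``polling_sec`` key is an assumption carried over from upstream aktualizr, and the snippet name is hypothetical; confirm against the parameters listed below::

    # /etc/sota/conf.d/z-60-polling.toml (hypothetical name)
    [uptane]
    polling_sec = 300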

.. _ref-aktualizr-lite-params:

Parameters
~~~~~~~~~~

-The following are aktualizr-repo's configuration parameters that can be useful to play with, the presented values are the default one.
+The following are aktualizr-repo's configuration parameters that can be useful to modify.
+The presented values are the defaults.

.. code-block::

52 changes: 22 additions & 30 deletions source/reference-manual/ota/ci-targets.rst
@@ -3,29 +3,25 @@
CI Targets
==========

-The point of FoundriesFactory is to create Targets. The magic
-is how a ``git push`` can make this all happen. Because
-of how easy these are to create, there is another type of Target,
-:ref:`Production Targets <ref-production-targets>`, that are intended
-to be used for production devices. However, it's almost always
-originally created by CI when:
+The point of a Factory is to create Targets.
+A ``git push`` is all that is required to trigger a Target to be built.

-* A change is pushed to source.foundries.io
-* A CI job is triggered in ci.foundries.io
-* The CI job signs the resulting TUF ``targets.json`` with the Factory's
-  "online" targets signing key.
+There is another type of Target, :ref:`Production Targets <ref-production-targets>`, that is intended to be used for production devices.
+However, it is almost always originally created by the CI when:

-The online targets signing key ID can be seen in the TUF root
-metadata:
+* A change is pushed to ``source.foundries.io``
+* A CI job is triggered in ``ci.foundries.io``
+* The CI job signs the resulting TUF ``targets.json`` with the Factory's "online" Targets signing key.
+
+The online Targets signing key ID can be seen in the TUF root metadata:

.. code-block:: bash

    $ fioctl get https://api.foundries.io/ota/repo/<FACTORY>/api/v1/user_repo/root.json \
      | jq '.signed.roles.targets.keyids[0]'

-Due to the number of changes and development branches a typical
-customer may have, the TUF targets metadata can grow to include large
-numbers of Targets. There are two ways these are dealt with:
+Due to all the changes and branches you may have, the TUF Targets metadata can grow to include a large number of Targets.
+There are two ways this can be dealt with:

* Condensed Targets
* Target Pruning
@@ -35,23 +31,20 @@ numbers of Targets. There are two ways these are dealt with:
Condensed Targets
-----------------

-Each device is configured to take updates for Targets that include
-a specific tag. Because of this, the most of the Targets in the
-CI ``targets.json`` aren't relevant and can be ignored by the device.
-In order to provide smaller TUF metadata payloads, the Foundries
-back-end employs a trick referred to as "condensed targets".
+Each device is configured to take updates for Targets that include a specific tag.
+Because of this, most of the Targets in ``targets.json`` are not relevant for any given device and can be ignored by it.
+In order to provide smaller TUF metadata payloads, the backend employs what is referred to as "condensed Targets".

-Condensed Targets are produced by taking the raw CI version and then
-producing condensed versions for each unique tag. For example, the
-raw targets.json might include::
+Condensed Targets are produced by taking the raw CI version, and then producing condensed versions for each unique tag.
+For example, a raw ``targets.json`` might include::

    version=1, tag=master
    version=2, tag=devel
    version=3, tag=devel
    version=4, tag=devel,experimental

-The back-end will actually produce three different condensed versions
-that are each signed with the Factory's online targets signing key::
+The back-end will actually produce three different condensed versions.
+Each one is signed with the Factory's online Targets signing key::

    # targets-master.json
    version=1, tag=master
@@ -64,19 +57,18 @@ that are each signed with the Factory's online targets signing key::
    version=3, tag=experimental
    version=4, tag=experimental

-The :ref:`device gateway <ref-ota-architecture>` is then able to serve
-an optimized targets.json to each CI device.
+The :ref:`device gateway <ref-ota-architecture>` is then able to serve an optimized ``targets.json`` to each CI device.

Target Pruning
--------------

-Each successful build appends a Target to targets.json. Eventually
-it grows too large and users will see an error in CI::
+Each successful build appends a Target to ``targets.json``.
+Eventually it can grow too large, and you would see an error in CI::

    Publishing local TUF targets to the remote TUF repository
    == 2022-03-24 00:44:18 Running: garage-sign targets push --repo /root/tmp.gkfCEF
    | An error occurred
    | com.advancedtelematic.libtuf.http.CliHttpClient$CliHttpClientError: ReposerverHttpClient|PUT|http/413|https://api.foundries.io/ota/repo/andy-corp/api/v1/user_repo/targets%7C<html>
    | <head><title>413 Request Entity Too Large</title></head>

-When this happens, it's time to :ref:`prune targets <ref-troubleshooting_request-entity-too-large>`.
+When this happens, it is time to :ref:`prune targets <ref-troubleshooting_request-entity-too-large>`.
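
As a hedged sketch of what pruning can look like from a host — verify the flags against ``fioctl targets prune --help``::

    # Prune all CI Targets carrying a short-lived development tag
    fioctl targets prune --by-tag devel --factory <factory>
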
10 changes: 8 additions & 2 deletions source/reference-manual/ota/configuring.rst
@@ -82,10 +82,16 @@ Two interesting things that can be done with this include ``on-changed`` and ``u
    }
    EOF

-The ``on-changed`` parameter allows to run commands upon receiving configuration fragments. By default, it only runs commands from ``/usr/share/fioconfig/handlers``, so a custom handler should be created in this folder, see `fioconfig_git.bb <https://github.com/foundriesio/meta-lmp/blob/main/meta-lmp-base/recipes-support/fioconfig/fioconfig_git.bb>`_ for reference.
+The ``on-changed`` parameter allows running commands upon receiving configuration fragments.
+To protect devices from arbitrary script execution, ``fioconfig`` only runs "trusted" scripts from the ``/usr/share/fioconfig/handlers/`` folder.
+As such, a custom handler should be created in this folder.
+See `fioconfig_git.bb <https://github.com/foundriesio/meta-lmp/blob/main/meta-lmp-base/recipes-support/fioconfig/fioconfig_git.bb>`_ for reference.
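
For illustration, a minimal hedged handler sketch; the handler name is hypothetical, and the assumption that fioconfig exposes the changed fragment's path via a ``CONFIG_FILE`` environment variable should be verified against the fioconfig source::

    #!/bin/sh
    # /usr/share/fioconfig/handlers/restart-myapp.sh (hypothetical)
    set -e
    # Assumption: CONFIG_FILE points at the updated configuration fragment.
    echo "Configuration fragment changed: ${CONFIG_FILE}"
    systemctl restart myapp.service  # hypothetical service to reload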

.. tip::
-   It is possible to use the ``on-changed`` parameter to run commands outside of the ``/usr/share/fioconfig/handlers`` folder by running ``fioconfig daemon --unsafe-handlers``. This would allow running configurations as::
+   For testing purposes, it is possible to use the ``on-changed`` parameter to run commands outside of the ``/usr/share/fioconfig/handlers`` folder.
+   This is done by running ``fioconfig daemon --unsafe-handlers``.
+   We do not recommend doing that in production.
+   This allows running configurations as::

      cat >tmp.json <<EOF
      {