Mavros in its own build stage #241
base: main
Conversation
Are you using QEMU for the arm64 build?
Nevermind, it doesn't seem like cross-compilation is well-supported by colcon.
Yes. And yeah, I briefly considered something in that space. I could imagine running that step in build-platform-native python3, and then either cross-compiling or qemu-compiling the resulting autogenerated files. But I can't imagine cleanly isolating that step within the build process.
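For what it's worth, BuildKit can express that split: a stage pinned to `$BUILDPLATFORM` always runs natively on the build host, so only the final compile pays the QEMU tax. A rough sketch with hypothetical stage names and paths, not code from this PR:

```dockerfile
# Sketch only: image tag, stage names, and paths are assumptions.
# This stage runs on the build host's native architecture regardless
# of the requested --platform, so codegen stays fast.
FROM --platform=$BUILDPLATFORM ros:rolling AS codegen
WORKDIR /work
RUN echo "run the native python3 code-generation step here"

# This stage runs on the target platform (under QEMU when cross-building)
# and compiles the pre-generated sources.
FROM ros:rolling AS compile
COPY --from=codegen /work /work
RUN echo "run colcon build against the generated files here"
```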
```dockerfile
    && apt-get -q -y upgrade \
    && apt-get -q install --no-install-recommends -y \
        git \
        gosu \
```
I'm open to leaving gosu in here if we split this into its own temporary Dockerfile, but if we add it to the original, we should consider a separate PR (or just add it with #226)
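For context, the usual role of `gosu` here would be an entrypoint that starts as root, fixes up ownership, and then drops privileges before running the container command. A minimal sketch, assuming a hypothetical `ros` user and workspace path:

```sh
#!/bin/sh
# Illustrative entrypoint, not the one in this repo.
set -e
# Give the non-root user ownership of the copied workspace, then
# re-exec the container's command without root privileges.
chown -R ros:ros /home/ros/ws_mavros
exec gosu ros "$@"
```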
.github/workflows/docker.yaml

```diff
@@ -18,6 +18,47 @@ env:
   PUSH: ${{ (github.event_name != 'pull_request') && (github.repository == 'Robotic-Decision-Making-Lab/blue') }}

 jobs:
   mavros:
```
Do we need to host this image if we aren't going to use it as the base image for the rest of the Dockerfile?
I do think the mavros build image needs to be built and cached ... somewhere. And I don't think all of those built layers are cached in `ci` ... it only caches the layer which includes the copied files? But there's still some poking needed to ensure that the `mavros` cache is loaded by the `ci` layers.
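One likely explanation: with the default `--cache-to` behavior (`mode=min`), buildx only exports layers that end up in the final image, so intermediate stages like the mavros build are rebuilt every time. `mode=max` also exports intermediate-stage layers. A hedged sketch; the registry refs are placeholders, not this repo's actual cache config:

```sh
docker buildx build \
  --target ci \
  --cache-from type=registry,ref=ghcr.io/ORG/blue:buildcache \
  --cache-to type=registry,ref=ghcr.io/ORG/blue:buildcache,mode=max \
  .
```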
Adding these links here for future reference:
Sorry, I thought I was looking at #220 😁
This pull request is in conflict. Could you fix it @amarburg?
You're not entirely wrong ... To avoid just burning CI minutes, I'm disabling arm builds on this branch. I can test performance on the desktop.
Merged in `docker-bake.hcl` changes from #266.
FYI, GitHub just added arm64 runners. We might be able to build without using BuildX now. EDIT: Here is another announcement with a bit more info.
I've submitted a PR mavros!1988 with the changes needed to support Jazzy and Rolling. I'll see what other changes need to be made to get the binaries released.
Out of interest, I wrote an MWE for building independent [...]. By my reading, [...]
Just as a note to myself: last tested this branch 11/8. To build, I needed to pull the [...]. The latter was too old and the mavros branch "ros2" would not build. Works in sim. Now need to check if #324 still works. Have not checked mainline to see if upstream binaries have been updated.
The Jazzy binaries have been updated and are available. I was expecting a Rolling sync last week, but it seems like that got delayed.
Changes Made
This is an experimental PR which isolates the Mavros/Mavlink build in its own (cacheable) build stage. It builds on #220.
Mavros is built in a standalone workspace (`~/ws_mavros`) in a build stage. This workspace is then copied into the `ci` stage. Subsequent builds in `ws_blue` then extend this mavros workspace rather than the base ROS environment (a rough sketch follows the lists below).

Besides increasing the complexity of the build process, the major negative is that more functionality is pulled into the ci stage, specifically:

- `ws_mavros` can be built and owned by the non-root user.
- The `ws_mavros` workspace is copied into the "ci" stage as a dependency of blue (given mavros would normally be installed by rosdep anyway if a binary was available).

(On further reflection, these aren't strictly required, depending on how pedantic we want to be about what's in/out of the ci stage:
- Since `noble`-based images include the `ubuntu` user anyway, this isn't a big deal.
- The workspace could instead be copied into the `robot` stage; appropriate `--ignore-key`s would need to be added to the call to `rosdep` in `ci` / `robot`.)
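To make the staging concrete, here is a rough sketch of the layout described above. Stage names follow the description, but the base image, user, and paths are assumptions rather than this PR's actual Dockerfile:

```dockerfile
# Illustrative only; base image, user, and paths are assumptions.
FROM ros:rolling AS mavros
WORKDIR /home/ubuntu/ws_mavros
# ... clone mavros/mavlink sources, rosdep install, colcon build ...

FROM ros:rolling AS ci
# Pull the prebuilt workspace in as a single cacheable unit; later
# builds in ws_blue extend this overlay instead of just /opt/ros.
COPY --from=mavros /home/ubuntu/ws_mavros /home/ubuntu/ws_mavros
```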
I've also committed my `docker-bake.hcl` to simplify repeatability. I have not updated the GitHub action to use buildx bake. This conflicts with #266, and if/when that's merged it will require some manual rebasing in this branch.
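For reference, the minimal shape of such a bake target might look like the following. This is an illustrative sketch, not the `docker-bake.hcl` committed here:

```hcl
# Names and platform list are assumptions.
target "mavros" {
  dockerfile = "Dockerfile"
  target     = "mavros"
  platforms  = ["linux/amd64", "linux/arm64"]
}
```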
Cross-compile performance

All builds were run on an i7-11800H, using `docker buildx bake mavros --no-cache` with the appropriate platform (only one platform at a time). Package times are reported by colcon; "total" is the value reported by `docker buildx` for the `colcon build` step.

"amd64 bare metal" is with Rolling installed directly onto the same laptop.

The Raspberry Pis run a clean distribution of Bookworm 64-bit "Lite" (no GUI), with no active or passive cooling on the board (so thermal throttling may be a factor).
Yikes.
n.b. The Pi4 would OOM (even with 8GB of swap), so the job was run with `-j2`.
Testing
Please provide a clear and concise description of the testing performed.