<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Docker Pirates ARMed with explosive stuff</title>
<link>https://blog.hypriot.com/index.xml</link>
<description>Recent content on Docker Pirates ARMed with explosive stuff</description>
<generator>Hugo -- gohugo.io</generator>
<language>en-us</language>
<lastBuildDate>Thu, 25 Jul 2019 06:31:00 +0200</lastBuildDate>
<atom:link href="https://blog.hypriot.com/index.xml" rel="self" type="application/rss+xml" />
<item>
<title>Releasing HypriotOS 1.11.0: Docker 19.03.0 CE from Raspberry Pi Zero to 4 B</title>
<link>https://blog.hypriot.com/post/releasing-HypriotOS-1-11/</link>
<pubDate>Thu, 25 Jul 2019 06:31:00 +0200</pubDate>
<guid>https://blog.hypriot.com/post/releasing-HypriotOS-1-11/</guid>
<description><p><strong>We&rsquo;re proud to announce our 1.11.0 release of HypriotOS - the fastest way to get Docker up and running on any Raspberry Pi.</strong></p>
<p><img src="https://blog.hypriot.com/images/release-1-11/raspberry-pi-4-b.jpg" alt="Raspberry Pi 4 B" /></p>
<p></p>
<h2 id="features-of-hypriotos">Features of HypriotOS</h2>
<p><strong>Latest Docker Engine 19.03.0 Community Edition</strong><br/>
You can use the latest features of the freshly-baked Docker Engine 19.03.0 while it is still warm. It includes Swarm mode, which gives you highly available services in a multi-node cluster with just a few simple commands.</p>
<p><strong>Up to date with Raspbian Lite Buster</strong><br/>
You can run HypriotOS with an up-to-date OS and Linux kernel: it is in sync with the current Raspbian Lite Buster and runs Linux kernel 4.19.58.</p>
<p><strong>Support for the complete Raspberry Pi family</strong><br/>
You can run HypriotOS on every model of the Raspberry Pi family - we support the Pi 1, 2, 3, the 3 B+, the Zero, and even the Compute Module and the new Raspberry Pi 4 B.</p>
<p><strong>Easy flashing and configuration</strong><br/>
We improved our <a href="https://github.com/hypriot/flash">flash tool</a>, which puts HypriotOS onto an SD card that is ready to boot from - all with a single command. With additional command line options you can customize HypriotOS during the flash operation for the best out-of-the-box first-boot experience.
HypriotOS includes <a href="https://cloudinit.readthedocs.io/en/18.3/">cloud-init</a>, which makes the first boot of your Raspberry Pi customizable and even lets you connect it to your Wi-Fi network during boot.
After booting, you can find the Raspberry Pi on your network with a simple <code>ping black-pearl.local</code> – no more searching for IP addresses, thanks to the integrated Avahi service discovery.</p>
<p><strong>Enhanced security out of the box</strong><br/>
We think that security should be shipped out-of-the-box, so we make HypriotOS more secure without you even noticing it. For instance, there is no built-in &ldquo;root&rdquo; user, and the default user &ldquo;pirate&rdquo; (password &ldquo;hypriot&rdquo;) can be customized or removed before the first boot. Just look at the file <code>/boot/user-data</code>: you can add your public SSH key, disable password logins and specify a different user account before you even boot your Raspberry Pi. Wi-Fi can also be customized and enabled, so you get Docker up and running over the air without attaching a keyboard and monitor.</p>
<p><strong>Smaller than Raspbian Lite</strong><br/>
Even though HypriotOS 1.11.0 is fully packed with the complete and latest Docker tool set, it now comes at a size smaller than the tiniest version of Raspbian (&ldquo;Raspbian Lite&rdquo;).</p>
<p>Please see all details in the <a href="https://github.com/hypriot/image-builder-rpi/releases/tag/v1.11.0">release notes</a>.</p>
<h2 id="quick-start">Quick start</h2>
<p><strong>Download our <a href="https://github.com/hypriot/flash">flash tool</a></strong></p>
<pre><code>curl -O https://raw.githubusercontent.com/hypriot/flash/2.3.0/flash
chmod +x flash
sudo mv flash /usr/local/bin/flash
</code></pre>
<p><strong>Now run this command to flash HypriotOS 1.11.0</strong></p>
<pre><code>flash https://github.com/hypriot/image-builder-rpi/releases/download/v1.11.0/hypriotos-rpi-v1.11.0.img.zip
</code></pre>
<p><strong>Afterwards, put the SD card into the Raspberry Pi and power it. That&rsquo;s all to get HypriotOS up and running!</strong></p>
<h3 id="next-steps">Next steps</h3>
<p>If you want to connect to the Raspberry Pi, run</p>
<pre><code>ssh pirate@black-pearl.local
</code></pre>
<p>with password &ldquo;hypriot&rdquo;.</p>
<h3 id="flash-with-wi-fi-settings-for-pi-zero-pi-3-pi-4">Flash with Wi-Fi settings for Pi Zero / Pi 3 / Pi 4</h3>
<p>If you want the Raspberry Pi Zero (or Pi 3 or Pi 4) to connect directly to your Wi-Fi after boot, change the hostname of the Raspberry Pi, and more, edit <code>/boot/user-data</code> on the SD card and have a look at <a href="https://blog.hypriot.com/faq/#wifi">our FAQ</a>. Alternatively, check out the parameters of the <a href="https://github.com/hypriot/flash">Hypriot flash tool</a>, which also allows you to define your own cloud-init user-data template file that will be copied onto the SD image for you:</p>
<pre><code>flash -n myHOSTNAME -u wifi.yml https://github.com/hypriot/image-builder-rpi/releases/download/v1.11.0/hypriotos-rpi-v1.11.0.img.zip
</code></pre>
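<p>For reference, a minimal cloud-init user-data template for the <code>-u</code> option might look like the sketch below. This is only an illustration - the country code, SSID, passphrase and hostname are placeholders, and the exact keys may differ between HypriotOS versions, so double-check against <a href="https://blog.hypriot.com/faq/#wifi">our FAQ</a>:</p>
<pre><code>#cloud-config
hostname: myHOSTNAME

# Placeholder Wi-Fi credentials - replace with your own
write_files:
  - path: /etc/wpa_supplicant/wpa_supplicant.conf
    content: |
      country=DE
      ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
      update_config=1
      network={
        ssid="MySSID"
        psk="MyPassword"
      }
</code></pre>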
<h2 id="feedback-please">Feedback, please</h2>
<p>As always, use the comments below to give us feedback and share it on Twitter or Facebook.</p>
<p>Please send us your feedback on our <a href="https://gitter.im/hypriot/talk">Gitter channel</a> or tweet your thoughts and ideas on this project at <a href="https://twitter.com/HypriotTweets">@HypriotTweets</a>.</p></description>
</item>
<item>
<title>NVIDIA Jetson Nano - Docker optimized Linux Kernel</title>
<link>https://blog.hypriot.com/post/nvidia-jetson-nano-build-kernel-docker-optimized/</link>
<pubDate>Sat, 04 May 2019 03:57:21 +0200</pubDate>
<guid>https://blog.hypriot.com/post/nvidia-jetson-nano-build-kernel-docker-optimized/</guid>
<description><p>Although the NVIDIA Jetson Nano DevKit comes with Docker Engine preinstalled and you can run containers out-of-the-box on this great AI and Robotics board, some important kernel settings are still missing for running Docker Swarm mode, Kubernetes or k3s correctly.</p>
<p><img src="https://blog.hypriot.com/images/nvidia-jetson-nano-build-kernel-docker-optimized/jetson-nano-board-docker-whale.jpg" alt="jetson-nano-board-docker-whale.jpg" /></p>
<p>So, let&rsquo;s try to fix this&hellip;</p>
<p></p>
<h3 id="analyzing-the-linux-kernel">Analyzing the Linux Kernel</h3>
<p>In my last blogpost <a href="https://blog.hypriot.com/post/verify-kernel-container-compatibility/">Verify your Linux Kernel for Container Compatibility</a>, I already shared all the details of how you can easily verify the Linux kernel for all Container related kernel settings. So this first part - analyzing the capabilities of the stock Linux kernel 4.9.x provided by NVIDIA - is already done and documented. It was an easy task, too, so everyone who&rsquo;s interested in the details can repeat it on their own device.</p>
<p>Let&rsquo;s recap what we found. In particular, there is one important setting used for networking: the feature called &ldquo;IPVLAN&rdquo; is required for Docker Swarm mode, and it&rsquo;s also used for networking in Kubernetes and k3s.</p>
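<p>If you&rsquo;d like to see this for yourself before building anything, you can grep the running kernel&rsquo;s configuration for this setting (assuming your kernel exposes <code>/proc/config.gz</code>, as the stock Nano kernel does):</p>
<pre><code class="language-bash">$ zcat /proc/config.gz | grep IPVLAN
</code></pre>
<p>If the output is empty or shows <code># CONFIG_IPVLAN is not set</code>, the feature is missing from your kernel.</p>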
<h3 id="building-your-own-linux-kernel">Building your own Linux Kernel</h3>
<p>Anyway, even when we&rsquo;d like to include or change only a single kernel setting, we have to customize the kernel configuration and compile our own Linux kernel. This is a common task on a desktop Linux system, but it can be pretty ugly and cumbersome if you have to build the kernel for an Embedded Device.</p>
<p>For all the other NVIDIA Jetson boards, like the TK1, TX1 and TX2, this requires a second Linux machine running Ubuntu 14.04 or 16.04 on an Intel CPU, plus setting up a complete build system for cross-compiling and all that stuff. Honestly, this is a well-known approach for an Embedded Developer, but the good news is that it&rsquo;s no longer required for the Jetson Nano DevKit.</p>
<p>You can customize and build your own Linux kernel directly on the Jetson Nano DevKit! You only need an internet connection and some time to perform all the steps on your own. And it&rsquo;s another chance to learn something new.</p>
<h3 id="preparing-the-build-environment">Preparing the Build Environment</h3>
<p>Before we&rsquo;re able to compile the Linux kernel on the Jetson Nano, we have to make sure we have all the required build tools installed. Here is all it takes; with a fast internet connection this is done within a few minutes.</p>
<pre><code class="language-bash">$ sudo apt-get update
$ sudo apt-get install -y libncurses5-dev
</code></pre>
<h3 id="download-linux-kernel-sources-for-jetson-nano">Download Linux Kernel Sources for Jetson Nano</h3>
<p>Next, we&rsquo;ll need to find and download the Linux kernel sources for the Jetson Nano DevKit directly from the NVIDIA website. The current version as of writing this blogpost is NVIDIA Linux4Tegra Release r32.1, or L4T 32.1 for short. Just follow this link <a href="https://developer.nvidia.com/embedded/linux-tegra">https://developer.nvidia.com/embedded/linux-tegra</a> and, under &ldquo;32.1 Driver Details&rdquo;, select the download link referenced for &ldquo;Jetson Nano&rdquo;, &ldquo;SOURCES&rdquo; and &ldquo;BSP Sources&rdquo;.</p>
<p>We can also download this package directly to the Jetson Nano. But please be aware that this download link can change over time, so verify it carefully.</p>
<pre><code class="language-bash">$ cd
$ mkdir -p nano-bsp-sources
$ cd nano-bsp-sources
$ wget https://developer.download.nvidia.com/embedded/L4T/r32_Release_v1.0/jetson-nano/BSP/Jetson-Nano-public_sources.tbz2
$ ls -alh Jetson-Nano-public_sources.tbz2
-rw-rw-r-- 1 pirate pirate 133M Mar 16 06:46 Jetson-Nano-public_sources.tbz2
</code></pre>
<p>Now extract the kernel source package &ldquo;kernel_src.tbz2&rdquo; from the downloaded file.</p>
<pre><code class="language-bash">$ tar xvf Jetson-Nano-public_sources.tbz2 public_sources/kernel_src.tbz2
$ mv public_sources/kernel_src.tbz2 ~/
$ cd
$ ls -alh ~/kernel_src.tbz2
-rw-r--r-- 1 pirate pirate 117M Mar 13 08:45 /home/pirate/kernel_src.tbz2
</code></pre>
<p>You may now free up some disk space and remove all the downloads, as we don&rsquo;t need them any more.</p>
<pre><code class="language-bash">$ rm -fr ~/nano-bsp-sources/
</code></pre>
<p>Last step, please extract the kernel source tree.</p>
<pre><code class="language-bash">$ cd
$ tar xvf ./kernel_src.tbz2
</code></pre>
<h3 id="compile-the-default-linux-kernel">Compile the default Linux Kernel</h3>
<p>Cool, we now have the complete Linux kernel source tree for the Jetson Nano DevKit downloaded and extracted.</p>
<p>As the next step, I&rsquo;d recommend first compiling the default, unmodified kernel in order to verify that we have all the build dependencies installed - this way, we&rsquo;ll also get familiar with compiling the kernel.</p>
<p>Before we can start the compile job, we have to make sure to use the correct kernel configuration file. This file, &ldquo;.config&rdquo;, is missing from the provided kernel source tree, but we can get it directly from the running Linux kernel on the Jetson Nano, where it&rsquo;s available in compressed form at &ldquo;/proc/config.gz&rdquo;.</p>
<pre><code class="language-bash">$ cd ~/kernel/kernel-4.9
$ zcat /proc/config.gz &gt; .config
</code></pre>
<p>Now, let&rsquo;s verify the content of the Linux kernel .config file.</p>
<pre><code class="language-bash">pirate@jetson-nano:~/kernel/kernel-4.9$ head -10 .config
#
# Automatically generated file; DO NOT EDIT.
# Linux/arm64 4.9.140 Kernel Configuration
#
CONFIG_ARM64=y
CONFIG_64BIT=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_MMU=y
CONFIG_DEBUG_RODATA=y
CONFIG_ARM64_PAGE_SHIFT=12
...
</code></pre>
<p>As you can see, it&rsquo;s a Kernel Configuration for Linux kernel version 4.9.140 and for ARM 64-bit architecture.</p>
<p>Start the kernel compile job. As the Nano has four CPU cores available, we keep them busy by using five parallel compile tasks.</p>
<pre><code class="language-bash">$ make prepare
$ make modules_prepare
# Use 5x parallel compile tasks
# Compile kernel as an image file
$ time make -j5 Image
...
real 28m13,235s
user 91m48,700s
sys 7m46,240s
# List newly compiled kernel image
$ ls -alh arch/arm64/boot/Image
-rw-rw-r-- 1 pirate pirate 33M May 4 00:14 arch/arm64/boot/Image
# Compile all kernel modules
$ time make -j5 modules
...
real 29m15,621s
user 92m41,176s
sys 8m18,404s
</code></pre>
<p>The Nano&rsquo;s CPU cores are pretty busy while compiling the kernel and kernel modules.</p>
<p><img src="https://blog.hypriot.com/images/nvidia-jetson-nano-build-kernel-docker-optimized/jetson-nano-board-compile-kernel.jpg" alt="jetson-nano-board-compile-kernel.jpg" /></p>
<p>The build/compile jobs will take around 60 minutes in total, but the good thing is that everything happens directly on your Jetson Nano DevKit. No other expensive equipment is required at all - just an internet connection and some of your time.</p>
<h3 id="install-our-newly-built-linux-kernel-and-modules">Install our newly built Linux Kernel and Modules</h3>
<p>After these pretty long compile jobs for generating our own Linux kernel and kernel modules, we are ready to install the kernel and verify that it boots correctly. First we should make a backup of the old kernel, then install the new kernel and all the newly built kernel modules.</p>
<p>Before we install the new kernel and boot the Jetson Nano, let&rsquo;s check the default Linux kernel version. Then we can compare it later to our own kernel.</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ uname -a
Linux jetson-nano 4.9.140-tegra #1 SMP PREEMPT Wed Mar 13 00:32:22 PDT 2019 aarch64 aarch64 aarch64 GNU/Linux
</code></pre>
<p>Here is also an ASCIINEMA recording of a <code>check-config.sh</code> verification done with the default kernel.
<a href="https://asciinema.org/a/244237?t=0:44"><img src="https://asciinema.org/a/244237.svg" alt="asciicast" /></a></p>
<p>As we can see, we do have a Linux kernel version &ldquo;4.9.140-tegra&rdquo;. This one was compiled at &ldquo;Wed Mar 13 00:32:22 PDT 2019&rdquo; and it&rsquo;s the default kernel provided by NVIDIA for the Jetson Nano.</p>
<p>Now, install our new kernel and kernel modules.</p>
<pre><code class="language-bash"># Backup the old kernel image file
$ sudo cp /boot/Image /boot/Image.original
# Install modules and kernel image
$ cd ~/kernel/kernel-4.9
$ sudo make modules_install
$ sudo cp arch/arm64/boot/Image /boot/Image
# Verify the kernel images
pirate@jetson-nano:~/kernel/kernel-4.9$ ls -alh /boot/Image*
-rw-r--r-- 1 root root 33M May 4 00:55 /boot/Image
-rw-r--r-- 1 root root 33M May 4 00:49 /boot/Image.original
</code></pre>
<p>Now, reboot the Nano and check the kernel again.</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ uname -a
Linux jetson-nano 4.9.140 #1 SMP PREEMPT Sat May 4 00:12:56 CEST 2019 aarch64 aarch64 aarch64 GNU/Linux
</code></pre>
<p>As you can see, our newly compiled kernel is working. The kernel version has changed to &ldquo;4.9.140&rdquo; - note the missing trailing &ldquo;-tegra&rdquo;, which indicates a custom build. The compile date/time has also changed to &ldquo;Sat May 4 00:12:56 CEST 2019&rdquo;.</p>
<p><strong>Hint:</strong> Please remember, every time you do change a kernel setting and compile a new kernel, you have to install the kernel image file AND the kernel modules.</p>
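<p>Since this rebuild-and-install cycle is always the same, you could wrap it in a small shell function - an untested, illustrative sketch that assumes the source tree lives at <code>~/kernel/kernel-4.9</code>:</p>
<pre><code class="language-bash"># Rebuild the kernel image plus modules and install both
rebuild_kernel() {
  cd ~/kernel/kernel-4.9 &amp;&amp;
  make -j5 Image modules &amp;&amp;
  sudo make modules_install &amp;&amp;
  sudo cp arch/arm64/boot/Image /boot/Image
}
</code></pre>
<p>After a reboot, <code>uname -a</code> should then show your new build date again.</p>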
<h3 id="customizing-the-linux-kernel-configuration">Customizing the Linux Kernel Configuration</h3>
<p>When it comes to modifying or customizing the Linux kernel configuration, things can get pretty complicated if you don&rsquo;t know where to start. First of all, it&rsquo;s a very bad idea to edit the .config file directly with an editor. Please, NEVER DO THIS - seriously!</p>
<p>The correct way to customize the kernel .config file is to use the right tooling. One tool that is already built-in and works right in your bash shell (also via ssh) is <code>menuconfig</code>; this is why we installed the build dependency &ldquo;libncurses5-dev&rdquo; at the beginning.</p>
<p>I don&rsquo;t want to go into all the details of how to use <code>menuconfig</code>, so here are the basic commands to start it, followed by an ASCIINEMA recording I made of changing the &ldquo;IPVLAN&rdquo; setting. That should give you a good idea of how this works.</p>
<pre><code class="language-bash"># Backup the kernel config
$ cd ~/kernel/kernel-4.9
$ cp .config kernel.config.original
$ make menuconfig
</code></pre>
<p>ASCIINEMA recording on how to include the &ldquo;IPVLAN&rdquo; kernel setting.
<a href="https://asciinema.org/a/244246?t=1:15"><img src="https://asciinema.org/a/244246.svg" alt="asciicast" /></a></p>
<p>Finally, let&rsquo;s re-compile the kernel and the kernel modules and install them, just like we did before.</p>
<pre><code class="language-bash">$ cd ~/kernel/kernel-4.9
# Prepare the kernel build
$ make prepare
$ make modules_prepare
# Compile kernel image and kernel modules
$ time make -j5 Image
$ time make -j5 modules
# Install modules and kernel image
$ sudo make modules_install
$ sudo cp arch/arm64/boot/Image /boot/Image
</code></pre>
<p>Reboot the Nano and check the kernel again.</p>
<h3 id="fast-forward-fully-container-optimized-kernel-configuration">Fast Forward - Fully Container Optimized Kernel Configuration</h3>
<p>As you have learned in this tutorial, you&rsquo;re now able to apply more and more settings to your kernel configuration. But that will certainly take some time.</p>
<p>In order to save you a lot of time and effort, I&rsquo;ve already optimized the Linux kernel in all details. You can find a public <a href="https://gist.githubusercontent.com/DieterReuter/a7d07445c9d62b45d9151c22b446c59b/">Gist at Github</a> with my resulting kernel .config. You can download it directly to your Nano and compile your own Linux kernel with this configuration.</p>
<pre><code class="language-bash"># Download the fully container optimized kernel configuration file
$ cd ~/kernel/kernel-4.9
$ wget https://gist.githubusercontent.com/DieterReuter/a7d07445c9d62b45d9151c22b446c59b/raw/6decc91cc764ec0be8582186a34f60ea83fa89db/kernel.config.fully-container-optimized
$ cp kernel.config.fully-container-optimized .config
# Prepare the kernel build
$ make prepare
$ make modules_prepare
# Compile kernel image and kernel modules
$ time make -j5 Image
$ time make -j5 modules
# Install modules and kernel image
$ sudo make modules_install
$ sudo cp arch/arm64/boot/Image /boot/Image
</code></pre>
<p>Now, reboot the Nano and check the kernel again.</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ uname -a
Linux jetson-nano 4.9.140 #2 SMP PREEMPT Sat May 4 02:17:23 CEST 2019 aarch64 aarch64 aarch64 GNU/Linux
pirate@jetson-nano:~$ ls -al /boot/Image*
-rw-r--r-- 1 root root 34381832 May 4 03:13 /boot/Image
-rw-r--r-- 1 root root 34048008 May 4 00:49 /boot/Image.original
</code></pre>
<p>ASCIINEMA recording of the final run of <code>check-config.sh</code> with the fully optimized kernel for running Containers on the Jetson Nano DevKit.
<a href="https://asciinema.org/a/244250?t=1:13"><img src="https://asciinema.org/a/244250.svg" alt="asciicast" /></a></p>
<p><strong>Result: An almost perfect Linux kernel to run Containers on the NVIDIA Jetson Nano!</strong></p>
<p><img src="https://blog.hypriot.com/images/nvidia-jetson-nano-build-kernel-docker-optimized/jetson-nano-board-docker-running2.jpg" alt="jetson-nano-board-docker-running2.jpg" /></p>
<h3 id="conclusion">Conclusion</h3>
<p>As you could learn from this short but highly technical tutorial, you&rsquo;re able to compile your own customized Linux kernel directly on the Jetson Nano DevKit, without the need for an additional and possibly expensive development machine. It can all be done within an hour or two, and you now have the ability to change kernel settings whenever you want to. Just customize the kernel .config file, compile a new kernel and kernel modules, and install them on your Nano.</p>
<p>This way you&rsquo;re able to optimize the kernel for all your needs. For running Containers on the Nano with the help of Docker, Kubernetes or k3s, you&rsquo;re now well prepared and know how to do this by yourself.</p>
<p>Once the Linux kernel is fully optimized with all important Container related kernel settings, you can run Docker Swarm mode, Kubernetes and k3s with all features on that great ARM board from NVIDIA.</p>
<p>Finally, May the 4th be with You!
<img src="https://blog.hypriot.com/images/nvidia-jetson-nano-build-kernel-docker-optimized/may-the-4th-be-with-you.jpg" alt="may-the-4th-be-with-you.jpg" /></p>
<h3 id="feedback-please">Feedback, please</h3>
<p>As always, use the comments below to give us feedback and share it on Twitter or Facebook.</p>
<p>Please send us your feedback on our <a href="https://gitter.im/hypriot/talk">Gitter channel</a> or tweet your thoughts and ideas on this project at <a href="https://twitter.com/HypriotTweets">@HypriotTweets</a>.</p>
<p>Dieter <a href="https://twitter.com/Quintus23M">@Quintus23M</a></p></description>
</item>
<item>
<title>Verify your Linux Kernel for Container Compatibility</title>
<link>https://blog.hypriot.com/post/verify-kernel-container-compatibility/</link>
<pubDate>Sun, 28 Apr 2019 08:48:50 -0700</pubDate>
<guid>https://blog.hypriot.com/post/verify-kernel-container-compatibility/</guid>
<description><p>Are you sure your Linux kernel is able to run Containers in an optimal way, or are there still some missing kernel settings that will lead to strange issues in the future?</p>
<p><img src="https://blog.hypriot.com/images/verify-kernel-container-compatibility/400px-NewTux.svg.png" alt="400px-NewTux.svg.png" /></p>
<p>Normally you don&rsquo;t have to worry about this question. When you&rsquo;re using Docker and Containers on a modern Linux system or on a public cloud offering, this has already been optimized by the Linux distribution or your cloud provider. But when you start using Containers on Embedded Devices, you should verify it carefully.</p>
<p>So, let&rsquo;s learn how easy it is to verify it by yourself&hellip;</p>
<p></p>
<h3 id="how-can-i-verify-the-linux-kernel-for-container-compatibility">How can I verify the Linux Kernel for Container Compatibility?</h3>
<p>Typically this is a really easy task, as soon as you know the right tools.</p>
<p>For running Containers you&rsquo;ll need some basic settings applied to your Linux kernel. Some settings are mandatory, while others are optional and only used for specific use cases. But let&rsquo;s see how we can use the right tools.</p>
<p>At the Docker open source project you can find a great bash script which runs all these tests against your Linux kernel configuration and tells you all the required details within a few seconds. The script is able to read the kernel config live from a running kernel or directly from a kernel .config file. This means you can also verify the container compatibility of a remote device.</p>
<h4 id="download-check-config-sh-script">Download <code>check-config.sh</code> script</h4>
<p>Let&rsquo;s download the bash script <a href="https://github.com/moby/moby/blob/master/contrib/check-config.sh">check-config.sh</a> directly from the Moby project (yes, this is the new name for the Docker open source project).</p>
<pre><code class="language-bash">$ wget https://github.com/moby/moby/raw/master/contrib/check-config.sh
$ chmod +x check-config.sh
</code></pre>
<h4 id="verify-the-linux-kernel-directly">Verify the Linux Kernel directly</h4>
<p>If you have your Linux system available you can download and run the script directly without any parameters.</p>
<pre><code class="language-bash">$ ./check-config.sh
</code></pre>
<p>Then you&rsquo;ll get a detailed output with all the kernel settings that are important for running containers.</p>
<p>If you want to verify a kernel from a remote system, you could also first extract the Linux kernel config on this system and analyse it later.</p>
<pre><code class="language-bash"># extract the .config from a running kernel
$ zcat /proc/config.gz &gt; kernel.config
$ ls -al kernel.config
-rw-rw-r-- 1 pirate pirate 165739 Apr 28 07:26 kernel.config
</code></pre>
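<p>This also works nicely over the network: pull the config from the remote device via SSH and run the script on your local machine (the user and hostname here are placeholders for your own device):</p>
<pre><code class="language-bash">$ ssh pirate@remote-device 'zcat /proc/config.gz' &gt; remote-kernel.config
$ ./check-config.sh remote-kernel.config
</code></pre>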
<p><strong>Hint:</strong> On some Linux systems, like Raspbian for the Raspberry Pi, the kernel .config is only exposed after loading a kernel module. In that case, load the module first with <code>sudo modprobe configs</code>.</p>
<h4 id="verify-the-linux-kernel-from-a-config-file">Verify the Linux Kernel from a .config file</h4>
<p>The kernel .config is a readable configuration file which is used to compile a new Linux kernel. Typically it gets embedded into your new kernel, and therefore you can read it from the running kernel. It&rsquo;s available as a file at <code>/proc/config.gz</code> in compressed form, so we have to use <code>zcat</code> to extract the .config file in clear text.</p>
<pre><code class="language-bash">$ zcat /proc/config.gz | head -10
#
# Automatically generated file; DO NOT EDIT.
# Linux/arm64 4.9.140 Kernel Configuration
#
CONFIG_ARM64=y
CONFIG_64BIT=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_MMU=y
CONFIG_DEBUG_RODATA=y
CONFIG_ARM64_PAGE_SHIFT=12
...
</code></pre>
<p>Next, let&rsquo;s run the <code>check-config.sh</code> script again and read all kernel configs from the file.</p>
<pre><code class="language-bash">$ ./check-config.sh kernel.config
</code></pre>
<h3 id="verify-the-linux-kernel-on-nvidia-jetson-nano-devkit">Verify the Linux Kernel on NVIDIA Jetson Nano DevKit</h3>
<p>As a real life example let&rsquo;s now verify the Linux kernel of the brand-new Jetson Nano DevKit from NVIDIA. I already wrote a blogpost about how to install Linux for the Nano board, see here <a href="https://blog.hypriot.com/post/nvidia-jetson-nano-intro/">NVIDIA Jetson Nano Developer Kit - Introduction</a>.</p>
<p>First we&rsquo;ll check the Linux kernel version, and we can see it&rsquo;s a current LTS kernel 4.9.</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ uname -a
Linux jetson-nano 4.9.140-tegra #1 SMP PREEMPT Wed Mar 13 00:32:22 PDT 2019 aarch64 aarch64 aarch64 GNU/Linux
</code></pre>
<p>Now, let&rsquo;s run the <code>check-config.sh</code> script on the Nano and determine all the Container related kernel settings. We&rsquo;ll get the complete output as colored text. From the screenshots below we can clearly see which of the required and optional kernel settings are already applied in the Nano&rsquo;s Linux kernel.</p>
<p>In the first section &ldquo;Generally Necessary&rdquo; all the mandatory kernel settings are listed, and for the Nano this is completely green, all is perfect.</p>
<p><img src="https://blog.hypriot.com/images/verify-kernel-container-compatibility/kernel-checkconfig-nano1.jpg" alt="kernel-checkconfig-nano1.jpg" /></p>
<p>Then in the second section &ldquo;Optional Features&rdquo; we can see that most Container related settings are applied, but a few are missing.</p>
<p>Not all of these are really important to have, but when we look at the &ldquo;Network Drivers&rdquo; I would recommend including all of them in the kernel to avoid issues. For example, if you want to use Docker Swarm mode, you have to know that <code>CONFIG_IPVLAN</code> is mandatory - this kernel can&rsquo;t run Swarm mode correctly!</p>
<p>For &ldquo;Storage Drivers&rdquo; you can typically ignore the missing settings for <code>aufs</code> and <code>zfs</code> as long as you don&rsquo;t need to use them; the same is true for <code>devicemapper</code>.</p>
<p><img src="https://blog.hypriot.com/images/verify-kernel-container-compatibility/kernel-checkconfig-nano2.jpg" alt="kernel-checkconfig-nano2.jpg" /></p>
<p>Here I&rsquo;d also like to present the output as pure ASCII text so you can easily analyse (search, copy&amp;paste) it later.</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ ./check-config.sh
info: reading kernel config from /proc/config.gz ...
Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_IPVS: enabled (as module)
- CONFIG_IP_NF_NAT: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_NF_NAT_NEEDED: enabled
- CONFIG_POSIX_MQUEUE: enabled
Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_MEMCG_SWAP: enabled
- CONFIG_MEMCG_SWAP_ENABLED: enabled
(cgroup swap accounting is currently enabled)
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_IOSCHED_CFQ: enabled
- CONFIG_CFQ_GROUP_IOSCHED: missing
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: missing
- CONFIG_NET_CLS_CGROUP: enabled
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: enabled
- CONFIG_IP_NF_TARGET_REDIRECT: missing
- CONFIG_IP_VS: enabled (as module)
- CONFIG_IP_VS_NFCT: enabled
- CONFIG_IP_VS_PROTO_TCP: missing
- CONFIG_IP_VS_PROTO_UDP: missing
- CONFIG_IP_VS_RR: enabled (as module)
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
- &quot;overlay&quot;:
- CONFIG_VXLAN: enabled
Optional (for encrypted networks):
- CONFIG_CRYPTO: enabled
- CONFIG_CRYPTO_AEAD: enabled
- CONFIG_CRYPTO_GCM: enabled
- CONFIG_CRYPTO_SEQIV: enabled
- CONFIG_CRYPTO_GHASH: enabled
- CONFIG_XFRM: enabled
- CONFIG_XFRM_USER: enabled
- CONFIG_XFRM_ALGO: enabled
- CONFIG_INET_ESP: missing
- CONFIG_INET_XFRM_MODE_TRANSPORT: enabled
- &quot;ipvlan&quot;:
- CONFIG_IPVLAN: missing
- &quot;macvlan&quot;:
- CONFIG_MACVLAN: enabled (as module)
- CONFIG_DUMMY: enabled
- &quot;ftp,tftp client in container&quot;:
- CONFIG_NF_NAT_FTP: enabled (as module)
- CONFIG_NF_CONNTRACK_FTP: enabled (as module)
- CONFIG_NF_NAT_TFTP: enabled (as module)
- CONFIG_NF_CONNTRACK_TFTP: enabled (as module)
- Storage Drivers:
- &quot;aufs&quot;:
- CONFIG_AUFS_FS: missing
- &quot;btrfs&quot;:
- CONFIG_BTRFS_FS: enabled (as module)
- CONFIG_BTRFS_FS_POSIX_ACL: enabled
- &quot;devicemapper&quot;:
- CONFIG_BLK_DEV_DM: enabled
- CONFIG_DM_THIN_PROVISIONING: missing
- &quot;overlay&quot;:
- CONFIG_OVERLAY_FS: enabled (as module)
- &quot;zfs&quot;:
- /dev/zfs: missing
- zfs command: missing
- zpool command: missing
Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000
</code></pre>
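<p>Once you have the output saved as plain text, the problem spots are easy to extract with a one-line filter. A small sketch - the filename <code>check-config.txt</code> is hypothetical, assuming you redirected the script&rsquo;s output there:</p>

```shell
# Filter a saved check-config output down to the options that are not set
list_missing() {
  grep ': missing' "$1"
}
# e.g.: ./check-config.sh > check-config.txt && list_missing check-config.txt
```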
<h3 id="conclusion">Conclusion</h3>
<p>With the easy steps we&rsquo;ve covered in this short blogpost, you&rsquo;re now able to verify whether the Linux kernel you&rsquo;re using can run Docker, containerd, Kubernetes or k3s in an optimal way.</p>
<p>Just keep this in mind whenever you discover some strange errors with your Container runtime on a new Linux system. This is especially important when you run Containers on an embedded device. We discovered a lot of missing kernel settings in the early days with the Raspberry Pi. Even today the Raspberry Pi kernel which comes with the default Raspbian is not fully optimized to run Containers, so the image built from HypriotOS is still a better alternative when you wish to run Containers on these devices.</p>
<p>And even when Docker runs out-of-the-box on a brand-new device like it does on the Jetson Nano, it&rsquo;s always a good idea to verify the Linux kernel - before you run into some strange errors.</p>
<p>In a later blogpost we&rsquo;ll optimize the stock Linux kernel for the brand-new NVIDIA Jetson Nano DevKit and we&rsquo;ll show you how to build your own customized kernel for this great board.</p>
<h3 id="feedback-please">Feedback, please</h3>
<p>As always use the comments below to give us feedback and share it on Twitter or Facebook.</p>
<p>Please send us your feedback on our <a href="https://gitter.im/hypriot/talk">Gitter channel</a> or tweet your thoughts and ideas on this project at <a href="https://twitter.com/HypriotTweets">@HypriotTweets</a>.</p>
<p>Dieter <a href="https://twitter.com/Quintus23M">@Quintus23M</a></p></description>
</item>
<item>
<title>Docker Engine on Intel Linux runs Arm Containers</title>
<link>https://blog.hypriot.com/post/docker-intel-runs-arm-containers/</link>
<pubDate>Sat, 27 Apr 2019 10:48:50 -0700</pubDate>
<guid>https://blog.hypriot.com/post/docker-intel-runs-arm-containers/</guid>
<description><p>Did you read the latest news from Docker about their newly announced technology partnership together with Arm, <a href="https://twitter.com/Docker/status/1121054608795688963">&ldquo;Docker and Arm Partner to Deliver Frictionless Cloud-Native Software Development and Delivery Model for Cloud, Edge, and IoT&rdquo;</a>?</p>
<p><img src="https://blog.hypriot.com/images/docker-intel-runs-arm-containers/arm-docker-logo.jpg" alt="arm-docker-logo.jpg" /></p>
<p>This is really great, ground-breaking news, as it will enable an improved development workflow: build and test all your Arm containers on your Intel-based laptop or workstation. These new Arm capabilities will be available in the <a href="https://www.docker.com/products/docker-desktop">Docker Desktop</a> products from Docker, both for macOS and Windows, and in Docker&rsquo;s commercial enterprise offerings. The first technical details and the roadmap will be announced next week at <a href="https://www.docker.com/dockercon/">DockerCon 2019</a> in San Francisco, so please stay tuned.</p>
<p>But wait, what about all the users who are working directly in a pure Linux environment? Well, here&rsquo;s the good news: the basic technology you need is already available and ready to use.</p>
<p>Yes, you can use it right away! Let&rsquo;s start now&hellip;</p>
<p></p>
<h3 id="run-an-arm-container-with-docker-engine-on-intel">Run an Arm Container with Docker Engine on Intel</h3>
<p>I don&rsquo;t want to hold you back and bore you with a lot of background details, so I&rsquo;m going to show you how easy it is today.</p>
<p>First, let&rsquo;s start with a default Docker Engine installed on an Intel-based Linux system. For the sake of simplicity I&rsquo;d like to show it directly on the <a href="https://www.katacoda.com">Katacoda Training Platform</a>, so you can replay it without spending too much time. It just takes a few seconds to spin up a complete Linux Docker environment - just a few clicks away.</p>
<p><strong>Step 1:</strong> Start the <a href="https://www.katacoda.com/contino/courses/docker/basics">Docker Tutorial</a></p>
<p>Next you have to click on &ldquo;START SCENARIO&rdquo; and the Linux Docker environment is ready for you.</p>
<p><img src="https://blog.hypriot.com/images/docker-intel-runs-arm-containers/katacoda-docker-tutorial-start.jpg" alt="katacoda-docker-tutorial-start.jpg" /></p>
<p><strong>Step 2:</strong> Run an Arm-based Container</p>
<p>In the bottom right box you&rsquo;ll find a Linux Terminal window where you can issue your CLI commands. It&rsquo;s a real Linux bash shell you can control through your browser and Docker is already installed on this machine.</p>
<p>As soon as we try to start an Arm-based Docker container,</p>
<pre><code class="language-bash">$ docker run -it dieterreuter/alpine-arm64:3.9
</code></pre>
<p>we&rsquo;ll get a cryptic error message <code>exec user process caused &quot;exec format error&quot;</code>, which basically tells us that this Docker Image tries to start a binary/executable which can&rsquo;t run on the given Intel processor.</p>
<p><img src="https://blog.hypriot.com/images/docker-intel-runs-arm-containers/katacoda-docker-tutorial-runarm1.jpg" alt="katacoda-docker-tutorial-runarm1.jpg" /></p>
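<p>By the way, the loader makes this decision from the ELF header of the executable: the <code>e_machine</code> field at byte offset 18 encodes the target architecture (<code>3e</code> for x86-64, <code>b7</code> for Arm 64-bit, <code>28</code> for Arm 32-bit). You can inspect it for any binary without running it; a quick sketch using POSIX <code>od</code>:</p>

```shell
# Print the low byte of the ELF e_machine field (offset 18) of the local shell
od -An -tx1 -j18 -N1 "$(command -v sh)"
```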
<p><strong>Step 3:</strong> Run the magic command to enable Arm/Arm64 on Intel</p>
<pre><code class="language-bash">$ docker run --rm --privileged hypriot/qemu-register
</code></pre>
<p>This registers a few Qemu emulators with our Linux kernel with the help of the <code>binfmt</code> facility. It instructs the Linux loader to start the matching Qemu emulator program to run a binary/executable whenever it&rsquo;s not built for Intel. Here we register <code>/qemu-arm</code> for Arm 32-bit and <code>/qemu-aarch64</code> for Arm 64-bit.</p>
<p>Just to be precise: these emulators are registered through a privileged Docker container, which is possible on all Linux kernel versions 4.9 and later. The emulators are loaded into memory, registered with the kernel, and stay resident until you reboot your machine. This means you don&rsquo;t have to change anything inside your Docker Images - all the magic is done by the Linux kernel on the host system!</p>
<pre><code class="language-bash">$ docker run --rm --privileged hypriot/qemu-register
Unable to find image 'hypriot/qemu-register:latest' locally
latest: Pulling from hypriot/qemu-register
fc1a6b909f82: Pull complete
247c87d40120: Pull complete
1e300bd4bcdc: Pull complete
79c54222eda0: Pull complete
7d0efdace32f: Pull complete
Digest: sha256:17931ba1f5362c6fbf7f364b32bec7e06e0c376571a9e3b2849dea18ce887c91
Status: Downloaded newer image for hypriot/qemu-register:latest
---
Installed interpreter binaries:
-rwxr-xr-x 3 root root 6192520 Apr 27 17:17 /qemu-aarch64
-rwxr-xr-x 4 root root 5606984 Apr 27 17:17 /qemu-arm
-rwxr-xr-x 2 root root 5987464 Apr 27 17:17 /qemu-ppc64le
---
Registered interpreter=qemu-aarch64
enabled
interpreter /qemu-aarch64
flags: OCF
offset 0
magic 7f454c460201010000000000000000000200b700
mask ffffffffffffff00fffffffffffffffffeffffff
---
Registered interpreter=qemu-arm
enabled
interpreter /qemu-arm
flags: OCF
offset 0
magic 7f454c4601010100000000000000000002002800
mask ffffffffffffff00fffffffffffffffffeffffff
---
Registered interpreter=qemu-ppc64le
enabled
interpreter /qemu-ppc64le
flags: OCF
offset 0
magic 7f454c4602010100000000000000000002001500
mask ffffffffffffff00fffffffffffffffffeffff00
---
$
</code></pre>
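<p>The <code>magic</code> and <code>mask</code> pairs shown above are simply the first 20 bytes of an ELF header that the kernel matches on: <code>7f454c46</code> is the ELF signature itself, and the <code>b700</code> / <code>2800</code> near the end is the little-endian machine field that distinguishes Arm 64-bit from Arm 32-bit. The signature is easy to verify:</p>

```shell
# 7f 45 4c 46 are the bytes 0x7f 'E' 'L' 'F' - the ELF file signature
printf '\177ELF' | od -An -c
```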
<p><strong>Step 4:</strong> Run an Arm-based Container successfully on Intel</p>
<p>Now let&rsquo;s start the same Arm-based Docker container again, but this time it actually works successfully.</p>
<pre><code class="language-bash">$ docker run -it dieterreuter/alpine-arm64:3.9
</code></pre>
<p><img src="https://blog.hypriot.com/images/docker-intel-runs-arm-containers/katacoda-docker-tutorial-runarm2.jpg" alt="katacoda-docker-tutorial-runarm2.jpg" /></p>
<p>When we run the command <code>uname -a</code> it tells us we&rsquo;re running Linux kernel 4.14.29 on the Arm 64-bit architecture, indicated as <code>aarch64</code>.</p>
<p>SUCCESS - our Arm 64-bit Docker Container even works on Intel CPUs!</p>
<p><strong>References:</strong> The source code of this magic registration Docker Image is fully open source and can be found at <a href="https://github.com/hypriot/qemu-register">https://github.com/hypriot/qemu-register</a>. I also updated it today to the latest available Qemu 4.0.0 release.</p>
<h3 id="conclusion">Conclusion</h3>
<p>As you could see, it&rsquo;s damn easy to configure your Intel-based Linux Docker Engine to seamlessly run 32-bit or 64-bit Arm Containers in an emulation mode. As long as you run a recent Linux kernel, 4.9 or later, it just works.</p>
<p>Basically this is the same emulation technology that <a href="https://www.docker.com/products/docker-desktop">Docker Desktop</a> uses behind the scenes on macOS and Windows. BTW, this binfmt feature has been built into Docker Desktop since about April 2017.</p>
<p>In the end it&rsquo;s possible for a user to develop, build and test their Arm-based Docker Containers easily on an Intel-based Linux machine. And with the upcoming new features built into the Docker Engine, this multi-architecture development workflow will get better and better over time.</p>
<p>As Linux users we can already use all these basic features today. And for Mac and Windows users, Docker will present an even better user experience, so stay tuned for <a href="https://www.docker.com/dockercon/">DockerCon 2019</a> in San Francisco next week.</p>
<h3 id="feedback-please">Feedback, please</h3>
<p>As always use the comments below to give us feedback and share it on Twitter or Facebook.</p>
<p>Please send us your feedback on our <a href="https://gitter.im/hypriot/talk">Gitter channel</a> or tweet your thoughts and ideas on this project at <a href="https://twitter.com/HypriotTweets">@HypriotTweets</a>.</p>
<p>Dieter <a href="https://twitter.com/Quintus23M">@Quintus23M</a></p></description>
</item>
<item>
<title>NVIDIA Jetson Nano - Upgrade Docker Engine</title>
<link>https://blog.hypriot.com/post/nvidia-jetson-nano-upgrade-docker/</link>
<pubDate>Mon, 22 Apr 2019 08:44:52 +0200</pubDate>
<guid>https://blog.hypriot.com/post/nvidia-jetson-nano-upgrade-docker/</guid>
<description><p>In our last blogposts about the <a href="https://blog.hypriot.com/post/nvidia-jetson-nano-intro/">NVIDIA Jetson Nano Developer Kit - Introduction</a> and <a href="https://blog.hypriot.com/post/nvidia-jetson-nano-install-docker-compose/">NVIDIA Jetson Nano - Install Docker Compose</a> we dug into the brand-new <strong>NVIDIA Jetson Nano Developer Kit</strong> and we know that Docker 18.06.1-CE is already installed, but&hellip;</p>
<p><img src="https://blog.hypriot.com/images/nvidia-jetson-nano-docker-ce/jetson-desktop-login.jpg" alt="jetson-desktop-login.jpg" /></p>
<p>But this isn&rsquo;t the latest available version of the Docker Engine. So I&rsquo;d like to show you a few different options for upgrading the Docker Engine to the very latest version available for the NVIDIA Jetson Nano.</p>
<p></p>
<h3 id="check-the-current-docker-version">Check the current Docker Version</h3>
<p>For this tutorial I&rsquo;m starting again with a freshly flashed SD card image.</p>
<p>Flashing from macOS just takes a few minutes with the Hypriot flash utility, which can be found here <a href="https://github.com/hypriot/flash">https://github.com/hypriot/flash</a>.</p>
<pre><code class="language-bash">$ time flash --device /dev/disk2 jetson-nano-sd-r32.1-2019-03-18.img
Is /dev/disk2 correct? y
Unmounting /dev/disk2 ...
Unmount of all volumes on disk2 was successful
Unmount of all volumes on disk2 was successful
Flashing jetson-nano-sd-r32.1-2019-03-18.img to /dev/rdisk2 ...
12.0GiB 0:03:40 [55.8MiB/s] [=======================================================================================&gt;] 100%
0+196608 records in
0+196608 records out
12884901888 bytes transferred in 220.275160 secs (58494575 bytes/sec)
Mounting Disk
real 3m47.866s
user 0m1.648s
sys 0m30.921s
</code></pre>
<p>Next we have to attach a computer monitor via HDMI cable, a mouse and keyboard, and of course an Ethernet cable in order to get an internet connection. Now connect a micro-USB power supply and follow the instructions on the screen to perform the initial setup of the NVIDIA Jetson Nano Developer Kit. This takes around 5 to 10 minutes, and then we have a new Ubuntu 18.04 desktop running on that nice 64-bit ARM Cortex-A57 developer board.</p>
<p><strong>Pro-Tip on using Docker client:</strong></p>
<p>As we have seen in the last blogpost, we have to use sudo whenever we call a docker command in the shell. This can be easily resolved; we only have to issue the following command once.</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ sudo usermod -aG docker pirate
</code></pre>
<p>Next, log out and log in again - or just start a new shell with the group membership applied via <code>newgrp docker</code> - and we don&rsquo;t have to use sudo any more for our docker commands.</p>
<p>Show the current version of the Docker Engine installed on the Nano.</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ docker version
Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.1
Git commit: e68fc7a
Built: Fri Jan 25 14:35:17 2019
OS/Arch: linux/arm64
Experimental: false
Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.1
Git commit: e68fc7a
Built: Thu Jan 24 10:49:48 2019
OS/Arch: linux/arm64
Experimental: false
</code></pre>
<p>And here we see version <code>18.06.1-ce</code> of the Docker Engine, which is installed on the Nano SD card image <code>jetson-nano-sd-r32.1-2019-03-18.img</code>. And no, this is neither the latest nor the most secure version available.</p>
<h3 id="upgrade-docker-engine">Upgrade Docker Engine</h3>
<p>As we&rsquo;ve seen in the previous section, the installed Docker Engine version is 18.06.1-ce. Now let&rsquo;s verify whether a newer version is already available.</p>
<p>We can just use the apt utility from the Ubuntu package manager to determine the installed version of any software package. Here on Ubuntu 18.04 the Docker Engine is installed through the <code>docker.io</code> package.</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ apt list --installed | grep docker.io
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
docker.io/bionic-security,now 18.06.1-0ubuntu1.2~18.04.1 arm64 [installed,upgradable to: 18.09.2-0ubuntu1~18.04.1]
</code></pre>
<p>We can see the newer Docker version 18.09.2 is already available in the Ubuntu repository, so we&rsquo;re going to upgrade the software package <code>docker.io</code>.</p>
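<p>For scripting such an &ldquo;is an upgrade available?&rdquo; check, the version strings can also be compared without apt. A small sketch using GNU <code>sort -V</code>, with the version numbers taken from the apt output above (for full Debian version semantics, <code>dpkg --compare-versions</code> would be the proper tool):</p>

```shell
# Compare two version strings; sort -V orders them numerically by component
installed='18.06.1'
candidate='18.09.2'
newest=$(printf '%s\n%s\n' "$installed" "$candidate" | sort -V | tail -n1)
if [ "$newest" != "$installed" ]; then
  echo "upgrade available: $candidate"
fi
```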
<pre><code class="language-bash">pirate@jetson-nano:~$ sudo apt-get --only-upgrade install docker.io
</code></pre>
<p>At the end we were able to upgrade the Docker Engine to the very latest version which is provided by the Ubuntu repository.</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ docker version
Client:
Version: 18.09.2
API version: 1.39
Go version: go1.10.4
Git commit: 6247962
Built: Tue Feb 26 23:51:35 2019
OS/Arch: linux/arm64
Experimental: false
Server:
Engine:
Version: 18.09.2
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 6247962
Built: Wed Feb 13 00:24:14 2019
OS/Arch: linux/arm64
</code></pre>
<p>Finally we have upgraded the Docker Engine to version 18.09.2. Please keep in mind, this version of the Docker Engine is provided by the Ubuntu repository. Maybe there is even a newer version available directly from the Docker open source project.</p>
<h3 id="recommendation-install-official-docker-engine-ce">Recommendation: Install official Docker Engine CE</h3>
<p>For this step I can truly recommend the official Docker documentation for the Community Edition. For installing Docker Engine CE on Ubuntu you can directly follow the detailed steps at <a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/">https://docs.docker.com/install/linux/docker-ce/ubuntu/</a>.</p>
<p>As you&rsquo;ll see, Docker Engine CE is also available for ARM 64-bit on Ubuntu 18.04 LTS. It&rsquo;s not too complicated to set up - just follow the installation steps in the documentation, and within a few minutes you&rsquo;ll have the official Docker Engine installed!</p>
<p><strong>Step 1:</strong> Uninstall old versions</p>
<pre><code class="language-bash">$ sudo apt-get remove docker docker-engine docker.io containerd runc
</code></pre>
<p><strong>Step 2:</strong> Set up the repository</p>
<pre><code class="language-bash"># 1. Update the apt package index:
$ sudo apt-get update
# 2. Install packages to allow apt to use a repository over HTTPS:
$ sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
# 3. Add Docker’s official GPG key:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
pub rsa4096 2017-02-22 [SCEA]
9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid [ unknown] Docker Release (CE deb) &lt;docker@docker.com&gt;
sub rsa4096 2017-02-22 [S]
# 4. Use the following command to set up the stable repository:
# here select the commands for &quot;arm64&quot;
$ sudo add-apt-repository \
&quot;deb [arch=arm64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable&quot;
</code></pre>
<pre><code class="language-bash"># Alternatively you can also select the &quot;edge&quot; channel for the very latest version
$ sudo add-apt-repository \
&quot;deb [arch=arm64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
edge&quot;
</code></pre>
<p><strong>Step 3:</strong> Install Docker CE</p>
<pre><code class="language-bash"># 1. Update the apt package index.
$ sudo apt-get update
# 2. Install the latest version of Docker CE and containerd
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
# 3. Add user &quot;pirate&quot; to the group &quot;docker&quot;, so we don't need sudo
$ sudo usermod -aG docker pirate
# Now logout and login again
</code></pre>
<p><strong>Step 4:</strong> Show the installed Docker Engine version</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ docker version
Client:
Version: 18.09.5
API version: 1.39
Go version: go1.10.8
Git commit: e8ff056
Built: Thu Apr 11 04:48:27 2019
OS/Arch: linux/arm64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.5
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: e8ff056
Built: Thu Apr 11 04:11:17 2019
OS/Arch: linux/arm64
Experimental: false
</code></pre>
<p><strong>Step 5:</strong> Run the <code>docker info</code> command</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 18.09.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.140-tegra
Operating System: Ubuntu 18.04.2 LTS
OSType: linux
Architecture: aarch64
CPUs: 4
Total Memory: 3.868GiB
Name: jetson-nano
ID: 4JER:EIWM:QFNF:6N2C:YUW3:YES2:RSP5:Z4D2:7PKI:YAOT:G5O7:5N25
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
</code></pre>
<p><strong>Step 6:</strong> Start first Docker container</p>
<pre><code class="language-bash">pirate@jetson-nano:~$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
3b4173355427: Pull complete
Digest: sha256:92695bc579f31df7a63da6922075d0666e565ceccad16b59c3374d2cf4e8e50e
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the &quot;hello-world&quot; image from the Docker Hub.
(arm64v8)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
</code></pre>
<p><img src="https://blog.hypriot.com/images/nvidia-jetson-nano-docker-ce/jetson-desktop-docker-ce.jpg" alt="jetson-desktop-docker-ce.jpg" /></p>
<h3 id="conclusion">Conclusion</h3>
<p>As you could see, on the Jetson Nano DevKit there is already a version of the Docker Engine installed and it&rsquo;s maintained by the Ubuntu project. But this is not the latest version available and it will not get updated fast enough to include all important security fixes in time.</p>
<p>Therefore I&rsquo;d strongly recommend using the Docker Engine CE from the official Docker project. It&rsquo;s well maintained and updated in a timely manner. All installation steps and options are also extremely well documented at <a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/">https://docs.docker.com/install/linux/docker-ce/ubuntu/</a>.</p>
<p>And in case you need more help or have some technical questions about running Docker on an ARM 64-bit system like the NVIDIA Jetson Nano, there is an <code>arm</code> channel available for you on the <a href="https://dockercommunity.slack.com/messages/C2293P89Y">DockerCommunity Slack</a>.</p>
<h3 id="feedback-please">Feedback, please</h3>