From c0df43cef653f69abebbf0ade774adc372390618 Mon Sep 17 00:00:00 2001 From: Don Naro Date: Thu, 25 Jan 2024 11:54:54 +0000 Subject: [PATCH] Migrating more blog posts (#403) * issue #397 migrate blog posts --- .../screenshot_future-of-content-1.webp | Bin 0 -> 6250 bytes .../screenshot_future-of-content-2.png | Bin 0 -> 11114 bytes posts/archive/ansible-3-qa.md | 216 ++++++++ ...collection-for-playbook-creators-part-1.md | 202 ++++++++ ...ollection-for-playbook-creators-part-2.md} | 35 +- ...ted-with-aws-ansible-module-development.md | 460 ++++++++++++++++++ ...-and-using-collections-on-ansible-tower.md | 2 +- ...ent-into-a-dedicated-ansible-collection.md | 248 ++++++++++ ...d-hat-and-ibm-and-the-ansible-community.md | 52 ++ .../the-future-of-ansible-content-delivery.md | 88 ++++ 10 files changed, 1284 insertions(+), 19 deletions(-) create mode 100644 images/posts/archive/screenshot_future-of-content-1.webp create mode 100644 images/posts/archive/screenshot_future-of-content-2.png create mode 100644 posts/archive/ansible-3-qa.md create mode 100644 posts/archive/getting-started-with-ansible-utils-collection-for-playbook-creators-part-1.md rename posts/archive/{getting-started-with-ansible.utils-collection-for-playbook-creators-part-2.md => getting-started-with-ansible-utils-collection-for-playbook-creators-part-2.md} (91%) create mode 100644 posts/archive/getting-started-with-aws-ansible-module-development.md create mode 100644 posts/archive/migrating-existing-content-into-a-dedicated-ansible-collection.md create mode 100644 posts/archive/red-hat-and-ibm-and-the-ansible-community.md create mode 100644 posts/archive/the-future-of-ansible-content-delivery.md diff --git a/images/posts/archive/screenshot_future-of-content-1.webp b/images/posts/archive/screenshot_future-of-content-1.webp new file mode 100644 index 0000000000000000000000000000000000000000..fc4ed0a13ae90540eb772524ca219f1c37c054ae GIT binary patch literal 6250 
zcmZvgWmFWvyT<7dP`ad7Qpu%5x?yRMUQ!yRSy@85rFU5xq&p=AX;>PU5|EZg8YC|N zd(S=h%l$I%@60^&%$zfG-g(dHswyd2Q=_37DavUZYKv+ZJ~1yuHXe=b7eR1M@|&Gi z$si@EIXAB;g8(r2oinuq4lPbZNQ8Z!BL_7OZDtdMK}Af z%zH2E`104$MK&~Mv?5UJZ>FD?c20YvUIii<*D>oK3T;7Y2V6GxB zO$gtskF_nB?2^E)cO@xyb{lNuA)F}QS0*J98QQ}WE>l<)+ThXaw%YD=K~yXo zHz#5sRC>y`qRp9-$oAW1*ZndofrESBV#(6$D1*>3K7c4!wjyJ{F%Q1{-LB-@uXxH| zN#?@5G?NCX3!f17zafzMg}(Ty!ngbw?WYx0Yd_C=1ltZrj5@PVLHO3757Mq%`2Yj` z&*HF;$=xgG(D2Dj#8|qWPD$3T1)kcp)G=Y!yvHN^>%J^BS>&*siMM(#g|qUF3RKf* z7g%_QQ<+*c&TU@F8Lt2;!#wU10UOOFHOc!WqE zirpGV>F9za!SC$Z4UFyN(B50c7{ukKZu0bwAfc@=P>CCjAxP?YdKF@}lo2yeMp8cP& zE01`4_~3IyR6zZ53aS2p{&oqwMB^rpM|45O#8`)R;#ph;m*|2B?R&o!V(lHk8 z=F^{N4cU&$fHw{=QNiGdt6LI2m>8?}0qdx}^>(u>Wg01c@z_YVrlmCrMAb~d;(Z*V zna0py|Ewe=7z5V{nxNDmbX5R@DPdvVSl8-}+h%I$f&NfeO`jGF4jKWhGPhFx|!rMbg>FGt9TX z>m&6IB~&pQo_3)3&9o{(HXaVKN&8JXKffMTn!BfGE?)!O0H5KJP7B6Gu~YR;t5hH5 zJ~ee>d^T4=@tOLi9P%W`k5sO}->6H?y|75dv5aR+OrE;grC}ng3(e5cts|DO+&+#q z6YCdRzOeNnDM3hbvCi;odL$fcA5Af}4Hx2AZS#jW*tOrkFHp$&v|pf;AZ3@U6OUyf$qQE1LCKigk+2~v~WJD34Co1@J?6=K{lg?CzH63sCZ^M!)iIO;{jO0R$O$IL)O{f!qB z-1l+3Ik2sKLN+Hdw5WuRlYbcp8ghu8H51>z4%ixUY=JSIE&Rl@*2PejoG?_f}JGX_SuADrblz7uc zs-f|YbFniA)IK2%@6rutxsiq3CMl;s*HxXXeii#}`^bITb29d&`&xW)&lLtK{E1s} zYra>61J4B#Dz1_yNI1iDV7aqZB0NiCv@S~8#hqi9Hg{b^to6d%t0Bg4t4=#-Y@ek4 z>=XPj3>#d37gcUn*1x0+axM$UD_W@Z(WTHnd` z-Fsb4HPEMNhv3G5xHTTo{(4{ez{x<)E>iHfdFRAQJnLE?JI8z-8Bxr|c7>8?fgt+> zg=c%@cOsY~eaJ#c)$62Tb;=h3re}-Z8(7ZXv>~+tvl?f2KjiMWC2p|dA8E1krqG%L zUJ0p;4t}aY27{Tn87(gxRzP1}5ecXCl6<6tAHTGiM+tY&%QhG9mZlrT#s+c0^Fe=3 zy@gX54<-V3&do-(eD|0KJ7QP!o~NGqclZeURc~a4?!7NSUz9DcxlCvu3fD7h0=935 zG`9_f&rDOHhLH9Ep9vh!M52ykJ-DvR~ja^Lu+E1}qU?{04DzkVG_1V&6?dx+o-&NYpdRvVd2 z)3;lh>2wM^*(q?SIIc@QY;?atb(`^`^2=6oevR;jekt-|g}m^#kdCvRRnx8|ehUew z+Z*7TL}eBdZVMU4Hdj7NoZVY&{1V#sFe~C5`&dTsMt6f!^JFTcL;tf-9!?tLFT_` zalfqlFspX>+=SgoL>z>N*3?Z(U+y7`CGd0;JE~9T;IC=DXTOcsgVAuVH4C^)+G!=L z@$udwb0Q7FWG^JdsDV*3~c~d7D z#qQImKV|*L-M8A87ZS|EW}MN|RNLP(VfrnNZda|97TK|DHElA{?Mc5=+kcxX?TEU5 
zpWH^Sh|x5Zo(}Qda(IX5o8j5!IqazQLd26T=)h^dI>P=G`yZ69hQUzo%&AC%scNn_qY~95<8g|D3eV%(*Gl5aGTXOKUUaH8wa$-)m&g;Cg~VsO(!AVgN7pJ;GY1i1DsvK!Ss#)|yLL zZ`9~?prWMwZ={hO%LZ(5nR2|CfAC|6GG)Mx-dze)7i8ABqT)E{TR%0NiE+{M#-Pv@ zqje=OVq{48pmDQH`0pQMD@$&4(M&#NB5?v^!;H?Xe88YH<-)gBJE?NWdF(!4%&g_R zSvs9;5j|K7J(1vv8bw99BEY1h(}kI5Wl1a!7gClCgD`=pMj^4ezejT$x-+TdKQ@2j z!S)@eTkXl3A0Mv|1n**f9=t}q`i|JXZ+lGAoCevWLJTZ3O{ zJxUXv$!w}rN&CTl6nR$r&i&;`eN8b?!jCdw#inh4q>f8mdwP|fx>}{U z6}Ggh=qTb*HEbiZ6=}-?JCzqF!YBw?!a=9kWWZ#RKBv>zBujUn-}R>@{;|o=aVlTa zqmVg$>d@;E@CablUgT z)sD;ChbyHa=w}&X@|*k}FpE&=xTRMrS*{F0%3$8mi%uM}GS^IK%f-!1Uih$$_GaXO zLZMENf)hgFC}HRax;0D^M={Vrqvj=rMp5%{8B`{enk`W_iIG-9pD$@~do2(~$Cen_ zq!wvVy7XPIce9R`kI(|~)c5nV(<6u|MftJRg)hkjtCU4VDtwLSz9S=gLg6W59UBKS{3jGidO|-k>?ag` z!i_!go;6N=jv2tR1i=MK3skPy-`K_2f~ab4gb_JgO1d}_ARXNBfi6FPB z-Ghzmu&u5)eRWCrDDQQb%?ttC77-W~1)k|mJ;MdL82Nr(?P`*HUp%;Pc0-*2fU2lM zQ~gxNBTeq)x)q=GY#A;rgBA0&>+gRKywtynICju&p(FIEd>0~)IV3^U_v3!ZXojt& zD^?y(!A=#xQ!33^B&i{J-YSz2!nE;HBO(!JwN&>h5gqE;>Nbt6)gMG)mEv4u9Lh&<#JtDexvaA<7vQnA z*;cH@@2J`zsgu%n4r!qH2u9?IBW)hQ-`EdbK~vJDggsIDAMd=3UeelZP`2j$v^fob ztK9eeCeCp~0*&}BJc6k;Ycf?b$R%YI{-WQwcYd#Yskrsv)j-oG&mFe=koI-5c2qJ< z?9lAtIU%IKw4choc{`cmPK>45QzDzFlr3p(<#3C(MCJS8`;s*4cDUOw08~JPVo2Ad zy7%9`;>s|kWhhW#^7*u^pkw@A~O|=|Y*MOw%4&#Wn-P8GP5aY`b?o3LfkZ*mF8uy;J zvjDY3;cDKCHs1X@qTGOMhs@lE&h3QES-0k@R-pUG;$1=Ezhs=GGIWioWEHcGYs@|} zx^VR{>b-#J%6vRY3qF%_O@{Cdz8Uceu$ceA$_9#SEYZT|y3SfIwePt-1);OII%lPW z@H46betq1K5{J?96Wk}S9I>TvwVrICJMC=jMTg2ay_=qXvyX|P4-RjM_?Xoo#q{me z5SjOhbA^ZQgH;RGAXF?HLAk+f7Ts3dh{x-ov`-a%SgR)FQNN` z|Nf~W_;cw0-HWUS_jVDdAXys0u#H*M8xX3$p?8cG!HXZpfgd4^@5x@ebKBj*7;rXo zziFWS-yT*Kk?dTxuS-1gxX90k1#?E2uMua_mL(ofhrdnC+9T!|vbW3NQm$HJ;;6EK ziDxOW39nL+VhWw-y7~o3r;cf`RW2{AM9RvY|IwCusf_uM0dh#LkqOi2umd>iUPocs*K7>VXb4w%N^SX5$yI@SW!vB$_V z8&zdHqDdUl$z1`dU4ntNXQf7x(IoOIPR7eLc!4j}ENLfTLE!0(guoMl7Ol+i=?q~s zb!@2()i~;VOkNohA===tpL71cQi==gAWV&-YXI{pWi~X5^8|+UYU(TC)E+#Z@cB|lvEEy1R@(mhxi7EgR zUG*L-o=&PvLfHoeN$z?X5`E^zCb@8Qk?-h}RHHtwL!ObS)>wNte+A^A2-9RZM!N_o 
z7L*)gOSC;0FuyF3{uj#te)_Wkz0|tF{!Yc`FX)YzqAVxiu>WmYxltGVP}77$@7^Sl zp7)<+=O}~#ogm~Cp#_m>8VrBqNGQ5Dr_g?ECVjxvLSswB0jDSz=nDJyp&=wmDf%}_ z2BCDeg8r|#s8a^VHKfn7XFM}NQeM)Zs5Kdehj>^Z12g7%JwRcFDr&@5Qx0zc$`@CY z;q}jK7NwNIwcoaxqjTo*!dRvgVu9k2Qrw79`Od_Bf5G`wJ(IJz>iYegf+^pNDg)PLr$ zf)nPk=ZBJXKg%al7lvV~4HVWztw+b6nIBFj&Qzl>^ON-ZY#O*Qa8OF)K=pOh@*>+lHC`3h-!G+Sz{#PwWX-@ zA(0h8`I!eJE)6B?w>t`H2qgziBDx#OO%RhtESX4UqHIBQASu1;9CsozVEPRv>jppl zpVKlH&as&Er0_TuB$+39h2tr;h^1+t8paf>cyKgxY?v_h|KONnAjjF=96EMTz7W9t<7k~$Rkbk&o# zH&z?(c2@};rHCr6!rFisWd*2+3j;&h)(EvPs2M`JWmOenx7~O7W*`=oeSEYE34HvT zp}S4P@rg??(gg*Pli365lFHx2yDh49+96G}h*g$VRYlD;+(wYKs2SpdqGXxOr+ncb zwkWkm0-!FZqI+Igyp1zvC|W`jbehggB0J_>V_1R&BgC=PEEij{?DKRbBH^x!L3W22%+zpMT)b0#hq(-Y3ndt0FF zUBr9n3<3roS@X=;y!+qbR`SH(A$haKV|EZi*Kz>VQQ9*TUHvcK7Pk$^mG;TV;Xg{; z*Ctf@Pd11>6-)6+mtUvlbSwJGx6?fsdc}jp>;cmmfj%V1Mr*`&5V{!0pP60f&1I&V zd%e8OpSI_b$hBNCpHi|1xQ*Ty9_?3TnsA#)wy9Jx9|zL^L%B<29UNZB>3J|2b^x~l z0+n^Q=hyHTfP?n-2}Q^{)u1{^kq8WazKo}ltIa|p44%ZQDt7$xTJ_8O^3h+UVrlH{>)-SDA~i1G z*u&txg0XNs7L5j2zI?^V8~Z;p2KM~&xaIt@G;-Q$#>mN#a-?;{=^^=p=d(u|8e*LP E0&2htrvLx| literal 0 HcmV?d00001 diff --git a/images/posts/archive/screenshot_future-of-content-2.png b/images/posts/archive/screenshot_future-of-content-2.png new file mode 100644 index 0000000000000000000000000000000000000000..f94d8e4099d2daad8064bd447f6c25db8fa8abe8 GIT binary patch literal 11114 zcmY*I@PDEMnzge!k7U9LR0*kl7zB#icRCrEf`2=Nq#5R^31^)CJR=KAR6D%9jYi<=~Hq5ep|PuAJOwYS^TA)gVe~CxZTp6=XIj z9be>7UZtV(lc&A_Iqp%uA=36J&h)7iy#<5r>{f~5KBeat zw3$Q%s^LvbqJ}e@^43OI%;UnUV`Z|~wRdUbW~rbC<%>szP@b6q&1c&v-8ClHX_Q0b zvf!Np#*;WEp_K+Bz^b^{g07zGn1~sS*Ein)P%~zn_)umP@u%ZmZK3K_`U=E$i;N+N zheiYapuY?W4sMs{rhi*@nB3Y<1RdzvUxX zNa_Jb#h>r3GkM**@_`MKn7+fam6*-b?dCmMS}2)Hm^%euR)??Q<8Z%v0YMUV)jfIr z^S@k$MPtgH(k+SvqO$r5y+=q)ect!Ta)Z{vz-W5m%8ss9) zF^VQ2rv6BMr!PH)%hv(T);5P#H4rU8h!$6J?xJaGs*9Q>z4rBer$b|&mMB@0;%34W zp4bH*A!V+WutDo+ZIZ)XKS?TBzTw==KlXD$B(J4*^JeWiQBZ5-eX18549*k3dB#Du 
zCJtUfQDk4xG%e~E5aQHv>y29lqnMh^2ItoskR1nv9zpL3C2}P6D*Dc2+9aMcjJ^Dp zIa}JRLc~WIv48%g+3u@elXzgo@oJxYI(q)N3vg!5dh0E=_*=`|B>uRRR!40^Gp!DT zcC-+&W?6C4&AWJ@b~BM3?LG-nU*5%gxZs)aBz>sOQXFS84LC3ujBnsh<&wv;Fj{tG zIyP^uwC`P5EY?+yPIdkEwbaUmx7wPa%QbF%lBp_vFUy%#x0N}%{mOHm`A5}?o^GvQ z{o!TNJ5ok-PL??#G&K;Lu1VWl*4aiu{N zGuz=bGaaDL%_=T4Hq!c*ba5CZk}=ah>0Vs`w)`3@ZuZvk>xOk1BmX41ALU_E63VZv zNhe2NKb2~FK!Gc{pp(kNf}EM^@$GnS&Znm|R2zp8&lTX5cfa^vvyL{~`}O97jksyR zeX9eg>fM?V%lzOKOU1}K5UHNT!Pvzwv4Y+Uf0+-R7!45Wd~r2$bnv&;?_}v*`gQa z_@L+@XT#wR5ezz=G&?pNS6O@*8++nM|0?Q9&`8RN3GW``1DooeL{x+1cjx4UrKcX_ zQ2qWQ)2QGwI2UAl_&AmzQnajo?=iM)lzu`XP4lH*^&gXVthtb&5Qit^15CMzIrsWe+ zk1rxbpHeT@Np|iV{Xq+9=4!4El}pt7fy8;P1^l{yc3Wsi7w*RR-aMjjCMFED+vnzX z#paX?(X^)xJ+jL3mfNQzotb_vr8HXi&T?}*#rx_A&5khu$^cURC>r$9K!rOu%{a`! zg@`Lh+4c4Lvk<9EY*!usk7tgL#6dZm$*{UcGseSJ2A#r{vr{6BRdJ?^-yIwY>|d~3 zXL5@D?GhmE&&NW0?H*ob$#J}25Lv38`?x>6u}_8(Y362edDI2}6jlJf=DE&r$PW0k zy*mTWqZbj-R3=i?EO-{alhohm%jy?N@^DV*uYX@*50~PK{E)qBK8jFg_)Liub&Fux zOa?H9uLNWEce9@xtiZn&mH^iDPW7HK>N0OoslIz*B0s0P?cOf|68UvF_Z_U-uHu&Zv>S`Ig*c4V*;v)KQ~*k5zm zNq?-@Jz-UgAH2b_TRU}fYvebFE$hPzW3eyGRsvpS34Q=IzdI=3vDW|nvSQD*)L9pI zQ#)?hKP-v-&~-ZiMAwCG)uli5twOwEw(H|8U^XaUu|vIjI%XG=ZtAK$5c9g%U7grP zzi?NaeJ@6#n@Xm5%Mf3HOQNj#e<@$@axn#!V7;gVUxReOP-O_j9(A zlLVcuuuItil|*#(!ycd7xac|mPSBce#AnPckEVJ?cdU4^rV?g#X zBeCW%rOT)3zk=9nAIfMYE3`jLzofQhfX~*US6rVfZ{c|1-BD<(%`4-J?`c~$cFipB z?MEF4(JxG}1GL&SjyiX<*R2;ol7o3KX4O{5Os2w{KKVg@IAJ^KB>Z$^zGbiieFq(k zi8y6HPR9XQ>Et-LqSF+533*KtMu#ByF-MjaLjpeHjRd(i^pf>G|f^Oj6XcrzEWXe{HIDzWq}ZP>?Y zT{Sj=|W7g4OJ z`mQ#w>_}ZGAA0be=gFQ%C@0uu`a7FFOXN*u zW$3S{)w>SH^BKMt@%G=PfP*IXN#zN0+9{a8XOu;AwZ%=@uOU~00ESRnoPLai}@^PfaAd6J)_(jR(V zBoYXHer`Z;hlus;OTIjix9jBO#RC#RQ~f9?j=58>WsK$d=f40qtl6M`hsQI6+<{U0 z$ngA0CH%4>59|C~ieEt%jXwLCd|?9nzl!9K5p?-i(H3AVKbg{(GvY+BlzbhfA>R*= zX(xCFMua1SD;b664`szwa&mHd{ze5%bmex8JuzoAGbkm~ilucb9aiH(5Z{byRnL9< z$rnxaE;W409<7uYtZ}j%P0e2(M?!bmsVBeH$wy@_N_$??$V)y z17(}8o{?c|53eOo9m9aUA+7qC!8IP(XMpQ>eguz^;$T-ueCRa9*Fa(Pfz*-Nj~LOg 
zMSUN8lNE70+wTt*2Q#o@RXZrCKn7uvLYv0bo{vF~_Ef^w?x514w2JlpDOG!E-{mmr zjXV)1I4}g&a+~5|R_nJ~<%~(iI;fchkdlVPK-_2iS?e{uI_}^v9+A-qtvY0ha5+xH zNGM@bC5gh2o@!(UcLWU0dMtl@nP4IVZ%?tAvFkf<>nmuadw-}sbcoEemQ@bS^#2I#)~;f)el+( z=Gh_&>ICfrXRr#*aMYBzFogoxiD(tA0V7 zeMh+P_7flX?z*mfsRN8g5!9(=1xH4gVn?1*lDKdFCFFIoSy!#1>r!J!tThv)**s2_ z@QF_VkrtR-elT{D)eVeCG~>YA-;h#g&w@pVL{Wh-fzYjY?m5Zupzkb_xqg7i8K zzqR-5JwpB59C_0IJpUFyr#tSM;g8ze-^uT{F#ROGWu?gg+w=xS%eF98H7s?PY=g)$ zAOF%(>+`ax$*fD0_|>3{l}JI=*Zq1w(%F9bx~*0>MPZvi=3K4%Xv?w%=IIxWpk#k* zL2%0&2atwlZYHfEumedfE?heUddC9vuq{j0oNTqE{vb1_ekt&!mT;xYaEX0{UkFiVZjc5# zvM=MD3#A=p-)u+u4-x5j$9S1|!evu@%`UCvX4la- zeSmb6Ixl&Xq3LH)kJF-ZtgJv<+l_~@L>S&3)ob#%oV+zyvsz{#TuJC93aZRY3$e$U zsQrN;1eoW28FbM0Aq?hzF72+nQg19`9C5M;%?fv0=X%cU_jOOze8X@9|9l&ofZf5P z@&dAW#DPjl`Z9rTAhMbL0Ce9muVt41WBN+7D|AfWgDtemB1u8E6FcunN&FOf{rqZr z^TmEnv*M@drzb@6d2V#?il4!8J}@=aIHj#mlQq>pmLV*K=41Lw;g_P(ZC;9~1_H|$`K9CR*?+EhJh9&`R5B9f879}Yff?;AY#W!m3 z&9UF+P61IqFv?=HHTbAM_87=7v{1r$Zm`hZ!A=l9McxX7JqY&^_afEkwfZxp`Je0m zxA4n>EFdv)19K_5PeVFS8Q(;WK!$E5>1Ap}`XQA;B@MD7wMMCE1ivKUEol|(KspyJ zwsLXuGL;g)(;{h0$PTMmE8I!tu_)&+cK#++QSMWA3w}H<(jX`RoiI$>nAn|rjkmKH zLh~W869-Vwjjg+rtX=9&!+I`7n3^0U%|o3hn&Wq}oMDlVseanHZ!~A=^5|}5IS+q2 z%yOm>pq3CEPt0B>n4Jc=1f2^9ohdDm;I3HzwRoRq_d)N2;45_}G&km5tDMwL;6_$M zLU(c`v&2Rvpd~j)&Tf~be>#cuA7ea^pyg`em=5|Ro8G(=y*Mq#wP0N&*F%j8`X-b) z+e)czzWh}*h4g@`ID*6+)~kgm82D2J$Y#IJklOOnt~j)5eOz8J@jCl;)q2CF%yZWz zY10k9KRAf6H+mz6>>qhea_y0cSlAys)WrHhweh?L&gFb1J@)VO@rjuFL1#z5g&dEz zeazIMS%W5yi2}xs*JCwNpaJ52$oP8Jks=5ak4UXnZKDx;!B7tq;_d6aEeYGuHUO)k z;Lh{Kmhy9&rf=^ZyMU6_8kKn#DRuMNb}`oSB{u1E7$;%Dz|+QASm8i4l zQApCjn%y?!!nK)6dP_ec>ySsvGY3DY3_s&{nIKGD2ezxOKifZzl+ZMp2tBW5kK)r_ zLE9A0#=}Kyuj%P>=C#dCDDHjjQb`g~7bR84qX5*UdThNLl+a%+XPg^!d@p`(dOs#& zawwKVW)_O_4%=+_Gtx=DD*`#la<-1Lm?-&v5Lx=9nErVPX7lVs`h9!iel-zF3y#I# zNO--vza~Y<9CJ-LmTDdC(PPv3xD=(lAz|Pcn`AIEr1@d0i&BcNYv77l+ZEjxtg9`{!#_f(8!@ zBLQ(BI%Xaq2Tj6l zDTUKcaF(>w?-BKsxiew;l1akn+?wy*Zf*6oBRT0*+vv`94E}@LNLVeiK2-O&cSA3BsLCb33^CiLhg7vc2Em 
zHixB+p-jla4C@V7p)7)R!93a+=jl`M>T7|bG~ZQB|0mCM#G&mMWm#!moM%q8b-8NgEm*1dI=R0oT8~Z(I_g+0gOH|K0gtF6N3z{_v?DL zSF^$S8A+gTWCb49-K+FI+{?RJy*@9IH!EfCa~ub(U6>W)@wgM~-l@sEqHnq88owaR ztfy2`o;dic7}!&_>pgw75H&GZ*-Zs#yKo2EUtr;e8`{*CtCf9$+J;RwrlTG76{za@ zAbkBH=vA_=&x|EqzJY++TY29skoQu^ z3wd{=vLj!>_mn5jrtkW(yVzrnm%O`XB__kA>fSu1mqFAwn|CT7KM!d1--?C2L0W=} z#XrpsI-4IH{kQ2$T#)WJ`P;6I(n9#(S|# z2n%!wA#J-8RNlDlf2W+&FR8l0b*X*S@;47jcHPi5gH64Fu! zpTW@_CGk%bU>=UC5YCwaHiiG)Ujc;_9iE>ZGl14;`CF&k+Mmdcd$ z+n@6{-t`BZHXrWM+y-O$M$(4{I;#t-0AG#}7R=@D4%oV$YF@YGSPV5eS;|KU!%t0s z8>fR$TopW8IOrdr#g)newBfAL-kRgA}K-XiQXUf zaTw72+DZ0BeVM+Ym_9B<>V!O!Mj|DNFzmOQ8EEMjPynN+A$(^KeCqRdgO^orkkyt0 zI{n{i?B5Y1-A(Ic#VuW)tEq+#vrtU9b5jqApE6);7*;w&6cH%|>=63#r?tgZ@f}CtJ z!0pqA>LQ3~Yg9;ShbP5Vpm*|KWg_ncFAlLi>%FVPxh*Du=x)zpx<@+_1p6q0LmJxQJNa!)|STpme zfmB8E-P4M0R4IkYVxp54qnu2I6F+t{wk3t~RJj#IHKx~WeioY0$mE7^9;Gh9H)P|B zU;pjyoGAF0P~GT`$|M7#<&yl;KpDEolYy96X$$tjFx<=<|;-obJjGGoGDj|jy5DNftdN^l#0jUWeXA{`KweMJ-sJ(69{?n;}d)H z$V)?8i6$ACSEA&G6_04hzQrAD5(c6$IFSdJKi~P)Fz7Gl z7sNIVNjQKdH-7LhDIi|~3Iw`B$v$;3#|~ukkg(C8h&3~9CDJTx$JTp~^Y3=|=jQ#n zX7of}i#fF=;^PpGalvWB>VATBeqE*B2X@mZLGi7&rt}|>{w{{CMwbU|elM$0P&-mY z)lW7DG>gx?#%Rg-Am6mVAN9t_w9k-&d@da?OJjsjBU~{0!P6oE=djiwJ39(0b)sq5 z0N(ontUFz$J-!3A@2Oh` zs%*G}W#K&y=W5#)wp5-Zjqym zQzf|Aj{%j?YTK>G^s`X(Pi9hupvo!jolwLsFAwIVqgWg(?B^IRK9<=%hSkp=<3vQr ztycB(3sQc31&qphdE$a#m6ans^Lxy>stl!E|C(6)>fb5+)XpBVdKEKqi3^Tiq~m#+ zz?IjBx;i^Z!{_0Jm9!?;Wy1T*ch5ZfqDiWXggoS$1h^Z~fYE1{gd4QiwyUqFNbH zUa3Dlm>_S@R&YxvjxjkJ!hsisdm*Kt$LoT|bX4b;4Hw$utA8_Vuop^71OJbOe8$C?Dv6+Airtn(_$ zYKUI&PzTNi3ROEJHoiK8PfhO9W&R)dIzKT^mx5n$dH{%P*yLH$+ieDJ!1`xz?kK40 z?jx2@;ov|UzBw(TqnyG1fBS(Q%gBt@9d8H;R!M%ovA(zzk&GyVQ8n9H21@6-4(GQe zX7k7%*7DxJXcNkH9mDXWQ4P7%D+IAGhQ-5O&k7~$3qg*;$FAQBMGp~uNqUk%oVB!9 z;JW~UOMAF=jg%Hd)6pX0l00}r@+Gt3aC_uQh?&PwBYIrBw-yW5nz}U8c~fxmsKil_ zGo{_&dh{3^2F(kVfK$>roh|ClTI-3j>*iMB#&joJ)SQV(e5vQgWbQ&_ zL#nmH`&ZXroYGur(;}iSo0d6(>%>+?x5| z9rPiMlEwGgB_bk$-zz73iXMB^t4d3Rls7Z}_wkLATe?rksWZR*|27t}#&98wVllzy 
z{%oc2-pst4?uGzXDxd^aGLakjF9h`*Y~{@;k*2QGrQ{dc4u@jMW$CV;9(D@klIV_!%@tY2Di~N)9Sx1d+9B5spzotfzk4LzEKK&m1ZT5`2 zOF{Z(`=%q^%%cJcYnZm)X0O~R#tK(EI4E4ouP^ovLY}3q5LVYd_-~*CKY0P=lM>D?4RezEHP`!{fO9X8asjRM*EZcc(yw!e!o)26 zJ-xh#-^6cjIwOQ6g!Yy`%Qx6nG6-NlYwc2jU;ZcOP0x$M(Sxz^R@NLQ7j6BTtFSW* z+QOe9FBo5B#D~#I(3v^>S_b!J*WWfGOLbSm7##UQo#`_j_&o{%)Mw)-#?)le6vUg& z_E(Mia0Fh10Eh?Fb2^;nN|Y-TmHymNKchX$+{haiI*=UWS$kKZ_qcO?jOBkgNL*`n zFqS<&8aSEgu^c;iJc}rL_}7SPtvaTg>*@A3fZe;c$zj#9svlgHS&_B)g>D@eFWW6z zQ0H{1_uGi(p{&Yj0$q^5eshANn$da^Bm^)ITe?fpCn0u-^0om}twVN|c`R1%tH_MMELUq$qMCoY%J;+GD!{+fSWTP@g|dd;Xz3;Vj!6E-rW-reJs84YqL| z?qIs#Qt@Qy^AMEN`ezte0uG@PopHm9Xxprfn3_p@=_w-Cd^73k!@Q(K*5Y2|1g4pY zaq-K0Ut%4i9#fi}l`M%V-CKe1ZL+zkJ;ljXs(Z>yXfx zGTjy_c^Xl3(+j74o192<&6$NUvCUm;(6|2=;-_NtN$fm#DaB?=GQKVX$!D|jGZJq7 z)F)~8a^a7et}uoGzUdj`0BT!szA*hW3Upp)DMm+w%+80R7}^kNC*YHrYwc4@zWhpU z*01`|SUt6w3GivEMT&yVfOpiuiCY{6oBd0(vYBuT%v$%`4f%N~BZ2+c#nF7hQ3pzo zQ`aKI9H%gRif)~13fHS-{yJPpuz++SrEB^9@K+0`Qe*=AsEjj4jrx-R|3Q))BX`Tf z@cWHO$~_iPCOkYj5Iz5#2#^BH?%c5u`1Qt=#bAJ^zTNY>T2lbU8cekp6-1s``nPJnKIZ{HrB_!=H#f?R>Zhm$KS-7 zhL2M`pr~0_Ma501QJo+fVi59UV)EO(KmFql+&E~YC?jd0j@jDMd?IMwh*o8He;t2T z<^cGU;alzN$Kq{Dw?gzYBi__X$#5S1@N1E}1P>;?%_R)@78QJIiQ-JVg&6R-93k@K z&0=qLP~bIcm8o?uou$ZLK~vC_6Hd|V=Ff9e&UORoPc)uYXDYhE1Y#dySj)xno+B3U z2GvvPWuS|UPFOb;CbBKIpMdAfLmjdQ@HS^ylZh({&c%$S@$^k3;KXjQZEtMlowY-V zHwI$=hz80?t83U~=dGC=C;zasx5luaU};zt3t?3mlxt)XvPJKuCzDzytnsx@B}q+F zIMVob6Qn0n_gje8rQ#t)yqqr_H8^9I{~XFSaNR|i2%^iLj}%p63epwAE%W*)RSl)( zOhK5NP4GuJ@3tzq(R}!>Yd1(4X%ArB!4;8FzrU*Xa5YYL-v^Ax3Mntr7#lW5z?&4s zrbFK{9c6DPZ>e%yl>I(cBu>wDK}%xbUH$H)Ka07x-S}2ti5Z)WGd6UuD_!)coa^WD zRyi|y&E?{P8$fPi$W5dKXH%ZJ8?=uQe8hn}%oM%-sPqKN)!<#gKcpG42v~r`3`=We zDOeSbK+cc#TTD5KudKU^mKRnLYoE4kMNj+S>Y%i*m$-Vd*|w64x> zvwx4)E??@&{Vs}sw{6)NsD!|tRqaBaQ3Fo$x@$5u7`51fsGyc%dFgEG#ilUBmY_mN z`yMdRO`=cpIRD`7y22|Gu^$l;HiU0~#>to)hY52YRA|=LeA4!8*_x1Cg;nl>_oJSzd57&5v#>%x2Xlx<*Z#VD&j*t8MP(N3iE^l^cM zj^II!cK;i4c$~7;i(xdTI*7a#!f5P4MG5BjS!tF2aDuon$3Sj*WQ^c@RrB_N5%pU!vDE*mR$Vuxu^Ev1QvcmAg$`&j=hDb8B6 
z9wDmkJ^5V;hW}Bcegjkg7Z;d16Ez%Qt!D_$sEE5{N8Xdh7jW-u0eRa;cF>tQFm075 zP8bZ-hLDgbdz7T7cAKpP14}&h$dE5@$AGyeij)l6@pL;0^Rhj! zl3a>|i~h?V_)kQz5dURP<=i-)2;3Z`SZTLdNR5|Yh}RC_Is2Oep|SIA5|t(TpqIK` zK`w7Wj%RmcCQVOTX>58QEx0Inu?nTHi~^XDY1neZ^v*^A3N(f~#E#Dpt2Y6&a9Y3M zvJw;jP0*c#e=lWuW05MgOFzLQhf6@c@$!#Ocp{OLdv-}#AVPRpl`yb^`(RXo`^g20W zKFoQtwzwYKS7S<1bMfGRMub5e!Fq0f%nR{BzN!+eRT6dQlA!1s1DMxbP7}MXumQ+f zoSD$wTFzFLBCSQROf@1~p}icUj4It!v>!8a4|2am?Bf;h=>5jbGJV&-uD_%aUn!1v z=wx6>X&KC6MytD zpO|JUPms0cyXqm!$j2@xgnUUw#V#S(iaqOY{>%xy3?0*bl)FSeJVY1em@Mns-USF98_mu!^mq$=q;H8IOZlR>sZ%ePik1e#d(a}G; z=g87RP>kkUi1oQik7a;JodXL6-{!almSf&WTj{-cJd5-2$X^4I1r|ksdq_+yhNQ4pAj&{90%m-8B-iBhzE2b5H!^-MCnhg z^3RguHimppUF`f5?(wohmpV7~pMT-zW#XS%rt8`TyNA7+`_pRQ3E#t8>g1N-+gKj@ zy1EV&`lWKc>jYI44}Lg!aVyiLjDdwUu%fu?(LI~Ujywv;!uQ*tR48=K0Z%xeT1A-J zHYBz}o0oF=2.10.6,<2.11* so that the appropriate core package gets installed automatically. +For Ansible 4.0.0, this dependency will shift to *ansible-core>=2.11,<2.12* instead. + +ansible-base 2.10.x (as well as ansible-core in the near future) will continue to be available as a standalone package for users that prefer installing only the Collections they need. + +**How is the range of included Collection versions established?** + +The release build tooling queries the latest version of included Collections and determines the upper-limit based on that version. + +For example, if a collection's version is 1.5, the range would be *>=1.5,<2.0*. +If the collection's version is 2.3, the range would be *>=2.3,<3.0*. + +The general idea is to keep Collections within a single major version throughout the lifecycle of a single Ansible package major version. + +**What version will ansible --version return?** + +`ansible --version` will return the version of ansible-base, not the version of the Ansible package, because ansible-base is the one providing the ansible command. 
+
+### Installing and upgrading
+
+**How can I install Ansible 3.0.0?**
+
+The Ansible 3.0.0 Community package is [released to PyPI](https://pypi.org/project/ansible/) and can be installed
+with `pip install ansible==3.0.0`.
+
+**Can I upgrade to Ansible 3.0.0 from previous versions of Ansible? If so, which ones?**
+
+Yes, but the command to upgrade differs depending on the version you have now.
+
+- To upgrade to Ansible-3.0 from Ansible-2.10: `pip install --upgrade ansible`.
+- To upgrade to Ansible-3.0 from Ansible-2.9 or earlier: `pip uninstall ansible`, then `pip install ansible`.
+  This is due to a limitation in pip.
+
+Ansible 3.0.0 is based on ansible-base 2.10, so playbook syntax remains
+the same between Ansible-2.10 and Ansible-3.0. However, there may be
+incompatibilities in some modules and plugins, as Ansible-3.0.0 allows
+backwards-incompatible changes in Collections.
+
+**Will I be able to upgrade to Ansible 4.0.0 from Ansible 3.0.0?**
+
+Yes, but you will have to uninstall and reinstall due to
+the renaming of ansible-base to ansible-core: `pip uninstall ansible`, then `pip install ansible`.
+
+Ansible 4.0.0 will be based on
+ansible-core 2.11, so playbook syntax in Ansible 4.0.0 may
+include backwards-incompatible changes (ansible-core does not
+use semantic versioning, so updates to the minor version can
+contain backwards-incompatible changes). When Ansible 4.0.0 is
+ready to start its pre-release cycle, porting guides will be
+available to help guide you through those changes.
+
+### Release cadence and scope
+
+**What is the release cadence moving forward?**
+
+Minor version releases of the Ansible package (such as 3.1.0 and
+3.2.0) are planned for every three weeks. These releases will
+include new backwards-compatible features, modules and plugins,
+as well as bug fixes.
+
+Major version releases of the Ansible package (such as 4.0.0 and
+5.0.0) will happen after new releases of ansible-core.
The Ansible 4.0.0 release is planned for May 2021, soon after the
+release of ansible-core 2.11 in April. After 4.0.0, a six-month
+release cycle for major versions will become the normal cadence,
+with 5.0.0 releasing in November, trailing the planned 2.12
+release of ansible-core.
+
+**How much change will each minor and major version of Ansible contain?**
+
+Each minor release of the Ansible community package will accept
+only backwards-compatible changes in included Collections.
+Collections must also use semantic versioning, so the Collection
+version numbers will reflect this rule. For example, if Ansible
+3.0.0 releases with community.general 2.0.0, then all minor
+releases of Ansible 3.x (such as Ansible 3.1.0 or Ansible 3.5.0)
+would include a 2.x release of community.general (such as 2.8.0
+or 2.9.5).
+
+Each major release of the Ansible community package will accept
+the latest released version of each included Collection and may
+include the latest released version of ansible-core. Major
+releases of the Ansible community package can contain breaking
+changes in the modules and other plugins within the included
+Collections and/or in core features.
+
+**What changes will each patch release contain, given the use of semantic versioning here?**
+
+Patch releases will be used only when bugs are discovered that
+warrant a targeted fix with a quick turnaround. For
+instance, if a packaging bug is discovered in our release of
+3.1.0 that prevents Debian packages from being built, a 3.1.1
+release may occur the next day to fix that issue. No new
+features are allowed in patch releases.
+
+### Packaging
+
+**Will Ansible 3.0.0 be made available as an upstream RPM?**
+
+No. RPM-based Linux distros, such as [Fedora](https://src.fedoraproject.org/rpms/ansible),
+have been creating superior RPM packages of Ansible for a while
+now. So we decided that, starting with Ansible-2.10 and
+ansible-base-2.10, the Ansible project would no longer provide
+pre-built RPMs.
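The Collection rules described in the release cadence section can be sketched as a simple semantic-version check. This is illustrative Python, not part of the build tooling, and the function name is hypothetical:

```python
def allowed_in_minor_release(shipped: str, candidate: str) -> bool:
    """A minor Ansible release may only pick up a Collection version that
    shares the major version shipped in X.0.0 and is not older than it
    (illustrates the semver rule described above)."""
    same_major = candidate.split(".")[0] == shipped.split(".")[0]
    not_older = tuple(map(int, candidate.split("."))) >= tuple(map(int, shipped.split(".")))
    return same_major and not_older

# community.general 2.0.0 shipped in Ansible 3.0.0:
print(allowed_in_minor_release("2.0.0", "2.8.0"))  # True
print(allowed_in_minor_release("2.0.0", "3.0.0"))  # False
```

A major release of the Ansible package drops this restriction and simply takes the latest released version of each included Collection.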
+
+**Will Ansible 3.0.0 be available on Ubuntu Launchpad?**
+
+Yes. The Ansible Community Team is catching up to the changes
+in how Ansible content is packaged but plans to have releases
+in the PPA soon. The team is currently testing a new GitHub
+action to build the debs for the PPA.
+
+### Terminology
+
+- *The ansible package*
+
+An all-in-one software package (Python, deb, rpm, etc.) that
+provides backwards compatibility with Ansible 2.9 by including
+modules and plugins that have since been migrated to Ansible
+Collections.
+
+The Ansible package depends on ansible-base (soon ansible-core),
+so when you run `pip install ansible`, pip installs ansible-base
+automatically.
+
+Ansible 3.0.0 contains more Collections thanks to the wider
+Ansible community reviewing Collections against the community
+checklist. The list of what's included can be found at
+[ansible-build-data](https://github.com/ansible-community/ansible-build-data/tree/master/2.10).
+
+- *Collection*
+
+A packaging format for bundling and distributing Ansible
+content: plugins, roles, modules, playbooks, documentation and
+more. Collections can be released independently of other
+Collections or ansible-base, so features and bug fixes can be
+made available sooner to users. Collections can be installed
+from source repositories, from
+[galaxy.ansible.com](https://galaxy.ansible.com/) via
+`ansible-galaxy collection install ` or using a [requirements.yml file](https://galaxy.ansible.com/docs/using/installing.html#installing-multiple-roles-from-a-file).
+
+- *ansible-base*
+
+New for 2.10. The codebase that is now contained in
+`github.com/ansible/ansible` for the Ansible 2.10 release. It
+contains a minimal set of modules and plugins and allows other
+Collections to be installed. It is similar to Ansible 2.9,
+though without any content that has since moved into a Collection.
+
+ansible-base was renamed to ansible-core in the devel branch of
+Ansible and will be released under that name from version 2.11
+onwards.
+
+- *Red Hat Ansible Automation Platform*
+
+The commercially available enterprise offering from Red Hat,
+combining multiple Ansible-focused projects, including
+ansible-core, awx, galaxy_ng, Collections and various Red Hat
+tools focused on an integrated Ansible user experience.
diff --git a/posts/archive/getting-started-with-ansible-utils-collection-for-playbook-creators-part-1.md b/posts/archive/getting-started-with-ansible-utils-collection-for-playbook-creators-part-1.md
new file mode 100644
index 00000000..d5511ad2
--- /dev/null
+++ b/posts/archive/getting-started-with-ansible-utils-collection-for-playbook-creators-part-1.md
@@ -0,0 +1,202 @@
+---
+author: Ashwini Mhatre
+date: 2022-01-24 00:00 UTC
+description: The Ansible ansible.utils collection includes a variety of
+  plugins that aid in the management, manipulation and visibility of
+  data for the Ansible playbook developer.
+lang: en-us
+slug: getting-started-with-ansible.utils-collection-for-playbook-creators-part-1
+title: Getting Started with Ansible.utils Collection for Playbook Creators
+---
+
+# Part 1: The Ansible.utils Collection for Playbook Creators
+
+The Ansible `ansible.utils` collection includes a variety of plugins that
+aid in the management, manipulation and visibility of data for the
+Ansible playbook developer. The most common use case for this collection
+is when you want to work with the complex data structures present in an
+Ansible playbook, inventory, or returned from modules. See each plugin's
+[documentation](https://docs.ansible.com/ansible/latest/collections/ansible/utils/index.html)
+page for detailed examples of how these utilities can be used in tasks.
+In this two-part blog series, part one gives an overview of the
+collection and part two works through a detailed example use case.
+
+# Plugins inside ansible.utils
+
+Plugins are code that augments Ansible core functionality.
This code executes on the control node and gives options and extensions
+for the core features of Red Hat Ansible Automation Platform. The
+`ansible.utils` plugin collection includes:
+
+- Filter plugins
+- Lookup plugins
+- Test plugins
+- Modules
+
+## Filter plugins
+
+Filter plugins manipulate data. With the right filter you can extract a
+particular value, transform data types and formats, perform mathematical
+calculations, split and concatenate strings, insert dates and times, and
+do much more. Ansible Automation Platform uses the [standard filters](https://jinja.palletsprojects.com/en/3.0.x/templates/#builtin-filters)
+shipped with Jinja2 and adds some specialized filter plugins. You can
+[create custom Ansible filters as plugins](https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html#developing-filter-plugins).
+Please refer to the
+[docs](https://docs.ansible.com/ansible/latest/plugins/filter.html) for
+more information.
+
+The `ansible.utils` filter plugins include the following:
+
+- [ansible.utils.from_xml](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.from_xml_filter.rst) - Convert a given XML string to a native Python dictionary
+- [ansible.utils.get_path](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.get_path_filter.rst) - Retrieve the value in a variable using a path
+- [ansible.utils.index_of](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.index_of_filter.rst) - Find the indices of items in a list matching some criteria
+- [ansible.utils.param_list_compare](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.param_list_compare_filter.rst) - Generate the final param list by combining/comparing base and provided parameters
+- [ansible.utils.to_paths](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.to_paths_filter.rst) - Flatten a complex object into a dictionary of paths and values
+- [ansible.utils.to_xml](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.to_xml_filter.rst) - Convert a given JSON string to XML
+- [ansible.utils.usable_range](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.usable_range_filter.rst) - Expand the usable IP addresses
+- [ansible.utils.validate](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.validate_filter.rst) - Validate data with provided criteria
+
+## Lookup plugins
+
+Lookup plugins are an Ansible-specific extension to the Jinja2
+templating language. You can use lookup plugins to access data from
+outside sources (files, databases, key/value stores, APIs, and other
+services) within your playbooks. Like all
+[templating](https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html#playbooks-templating),
+lookups execute and are evaluated on the Ansible Automation Platform
+control machine. Ansible makes the data returned by a lookup plugin
+available using the standard templating system. You can use lookup
+plugins to load variables or templates with information from external
+sources. You can also [create custom lookup plugins](https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html#developing-lookup-plugins).
+Please refer to the [docs](https://docs.ansible.com/ansible/latest/plugins/lookup.html) for more information.
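To make the data shaping concrete, the flattening performed by `ansible.utils.to_paths` (available as both a filter and a lookup) can be approximated in plain Python. This is a conceptual sketch of the idea, not the plugin's implementation, and the real plugin supports more options:

```python
def to_paths(obj, prefix=""):
    """Flatten nested dicts/lists into a path -> value mapping,
    using dotted keys and [index] for list positions."""
    flat = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            path = f"{prefix}.{key}" if prefix else key
            flat.update(to_paths(value, path))
    elif isinstance(obj, list):
        for index, value in enumerate(obj):
            flat.update(to_paths(value, f"{prefix}[{index}]"))
    else:
        flat[prefix] = obj  # leaf value: record the accumulated path
    return flat

print(to_paths({"interfaces": {"eth0": {"addresses": ["10.0.0.1", "10.0.0.2"]}}}))
```

Flattened paths like these pair naturally with `get_path`, which performs the reverse operation of pulling one value back out of the nested structure.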
+
+The `ansible.utils` lookup plugins include:
+
+- [ansible.utils.get_path](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.get_path_lookup.rst) - Retrieve the value in a variable using a path
+- [ansible.utils.index_of](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.index_of_lookup.rst) - Find the indices of items in a list matching some criteria
+- [ansible.utils.to_paths](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.to_paths_lookup.rst) - Flatten a complex object into a dictionary of paths and values
+- [ansible.utils.validate](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.validate_lookup.rst) - Validate data with provided criteria
+
+Note: In `ansible.utils`, some plugins are implemented as both filter
+and lookup plugins to give the playbook developer flexibility depending
+on their use case.
+
+## Test plugins
+
+Test plugins evaluate template expressions and return a value of True or
+False. With test plugins you can create
+[conditionals](https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html#playbooks-conditionals)
+to implement the logic of your tasks, blocks, plays, playbooks, and
+roles. Ansible Automation Platform uses the standard tests shipped as
+part of Jinja and adds some specialized test plugins. Please refer to
+the [docs](https://docs.ansible.com/ansible/latest/plugins/test.html)
+for more information.
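Conceptually, many of these tests reduce to the kind of network-membership check that Python's standard `ipaddress` module performs. The sketch below illustrates the idea behind a test such as `in_network`; it mirrors the concept, not the plugin's code:

```python
import ipaddress

def in_network(address: str, network: str) -> bool:
    """Does the address (treated as a /32-style network) fall
    within the given network? Conceptual sketch only."""
    return ipaddress.ip_network(address, strict=False).subnet_of(
        ipaddress.ip_network(network, strict=False)
    )

print(in_network("10.1.1.1", "10.0.0.0/8"))  # True
print(in_network("8.8.8.8", "10.0.0.0/8"))   # False
```

In a playbook the equivalent check would appear in a conditional, for example `when: item is ansible.utils.in_network '10.0.0.0/8'`.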
+ +The `ansible.utils` test plugins include: + +- [ansible.utils.in_any_network](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.in_any_network_test.rst) - Test if an IP or network falls in any network +- [ansible.utils.in_network](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.in_network_test.rst) - Test if IP address falls in the network +- [ansible.utils.in_one_network](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.in_one_network_test.rst) - Test if IP address belongs in any one of the networks in the list +- [ansible.utils.ip](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.ip_test.rst) - Test if something in an IP address or network +- [ansible.utils.ip_address](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.ip_address_test.rst) - Test if something in an IP address +- [ansible.utils.ipv4](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.ipv4_test.rst) - Test if something is an IPv4 address or network +- [ansible.utils.ipv4_address](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.ipv4_address_test.rst) - Test if something is an IPv4 address +- [ansible.utils.ipv4_hostmask](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.ipv4_hostmask_test.rst) - Test if an address is a valid hostmask +- [ansible.utils.ipv4_netmask](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.ipv4_netmask_test.rst) - Test if an address is a valid netmask +- [ansible.utils.ipv6](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.ipv6_test.rst) - Test if something is an IPv6 address or network +- [ansible.utils.ipv6_address](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.ipv6_address_test.rst) - Test if something is an IPv6 
address
+- [ansible.utils.ipv6_ipv4_mapped](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.ipv6_ipv4_mapped_test.rst) - Test if something appears to be an IPv4-mapped IPv6 address
+- [ansible.utils.ipv6_sixtofour](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.ipv6_sixtofour_test.rst) - Test if something appears to be a 6to4 address
+- [ansible.utils.ipv6_teredo](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.ipv6_teredo_test.rst) - Test if something appears to be an IPv6 Teredo address
+- [ansible.utils.loopback](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.loopback_test.rst) - Test if an IP address is a loopback
+- [ansible.utils.mac](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.mac_test.rst) - Test if something appears to be a valid MAC address
+- [ansible.utils.multicast](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.multicast_test.rst) - Test for a multicast IP address
+- [ansible.utils.private](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.private_test.rst) - Test if an IP address is private
+- [ansible.utils.public](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.public_test.rst) - Test if an IP address is public
+- [ansible.utils.reserved](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.reserved_test.rst) - Test for a reserved IP address
+- [ansible.utils.resolvable](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.resolvable_test.rst) - Test if an IP or name can be resolved via /etc/hosts or DNS
+- [ansible.utils.subnet_of](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.subnet_of_test.rst) - Test if a network is a subnet of another network
+- 
[ansible.utils.supernet_of](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.supernet_of_test.rst) - Test if a network is a supernet of another network +- [ansible.utils.unspecified](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.unspecified_test.rst) - Test for an unspecified IP address +- [ansible.utils.validate](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.validate_test.rst) - Validate data with provided criteria + +## Modules + +Modules are the main building blocks of Ansible playbooks. Although we +do not generally speak of "module plugins", a module is a type of +plugin. For a developer-focused description of the differences between +modules and other plugins, see +[Modules and plugins: what is the difference?](https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html#modules-vs-plugins). +Please refer to the [docs](https://docs.ansible.com/ansible/latest/plugins/module.html) for more information. 
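+
+Modules from this collection are invoked like any other task module. The following is a hypothetical sketch using `ansible.utils.update_fact`; the fact path and value are illustrative only:
+
+```yaml
+- name: Update a single key inside an already-gathered fact
+  ansible.utils.update_fact:
+    updates:
+      - path: interfaces.GigabitEthernet1.enabled
+        value: true
+  register: updated
+```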
+
+The `ansible.utils` modules include:
+
+- [ansible.utils.cli_parse](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.cli_parse_module.rst) - Parse CLI output or text using a variety of parsers
+- [ansible.utils.fact_diff](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.fact_diff_module.rst) - Find the difference between currently set facts
+- [ansible.utils.update_fact](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.update_fact_module.rst) - Update currently set facts
+- [ansible.utils.validate](https://github.com/ansible-collections/ansible.utils/blob/main/docs/ansible.utils.validate_module.rst) - Validate data with provided criteria
+
+## Accessing and using the ansible.utils Collection
+
+To download the utils Collection, refer to Automation hub (fully
+supported, requires a Red Hat Ansible Automation Platform subscription)
+or Ansible Galaxy (upstream):
+
+- [Automation hub Collection](https://console.redhat.com/ansible/automation-hub/repo/published/ansible/utils): `ansible.utils`
+- [Ansible Galaxy Collection](https://galaxy.ansible.com/ansible/utils): `ansible.utils`
+
+`ansible.utils` is also available in the supported execution environment
+along with its required Python libraries. Please refer to the
+[Execution Environments documentation](https://docs.ansible.com/automation-controller/latest/html/userguide/execution_environments.html) for more details.
+
+## Different use cases of Utils
+
+As we know, `ansible.utils` has a variety of plugins and serves various
+use cases.
The following are the most common use cases of `ansible.utils`:
+
+- Validating business logic before pushing configurations, using the validate and test plugins
+- Auditing architectural disposition and layouts, using test plugins
+- Managing complex data structures in Ansible playbooks, using the `get_path` and `to_paths` plugins
+- Conducting minor checks related to network addresses, using test plugins
+- Operational state assessment, using the `cli_parse` and `validate` plugins
+
+## Future scope
+
+Here are some additional `ansible.utils` capabilities that are on the
+horizon:
+
+- **Ipaddr filter plugin support:**
+  - The Ipaddr filter is designed to provide an interface to the [netaddr](https://pypi.org/project/netaddr/) Python package from within Ansible.
+  - It can operate on strings or lists of items, test various data to check if they are valid IP addresses, and manipulate the input data to extract requested information.
+  - `ipaddr()` works with both IPv4 and IPv6 addresses in various forms.
+  - There are also additional functions available to manipulate IP subnets and MAC addresses.
+  - We are currently working on supporting the `ipaddr` filter as part of the `ansible.utils` collection.
+
+- **Support for more validation engines in the ansible.utils.validate plugin:**
+  - Currently the validate plugin supports only the `ansible.utils.jsonschema` validation engine, but there are plans to add more validation engines.
+
+- **Support for different filter plugins to manipulate input data:**
+  - Recursive plugins: `remove_keys`, `replace_keys`, `keep_keys`
+
+## Contributing to this collection
+
+This collection is intended for plugins that are not platform or
+discipline specific. Simple plugin examples should be generic in nature.
+More complex examples can include real world platform modules to
+demonstrate the utility of the plugin in a playbook.
+
+We welcome community contributions to this collection. 
If you find
+problems, please open an issue or create a PR against the
+[ansible.utils collection repository](https://github.com/ansible-collections/ansible.utils).
+See [Contributing to Ansible-maintained collections](https://docs.ansible.com/ansible/devel/community/contributing_maintained_collections.html#contributing-maintained-collections)
+for complete details.
+See the [Ansible Community Guide](https://docs.ansible.com/ansible/latest/community/index.html)
+for details on contributing to Ansible.
+
+## Takeaways and next steps
+
+- `ansible.utils` plugins make the playbook writing experience simple and smooth
+- `ansible.utils` plugins run quickly because they are executed locally
+- The plugins are easy to understand, code, use, and integrate with other modules
+- Because `ansible.utils` is a plugin ecosystem, it is easy to add new plugins to it
diff --git a/posts/archive/getting-started-with-ansible.utils-collection-for-playbook-creators-part-2.md b/posts/archive/getting-started-with-ansible-utils-collection-for-playbook-creators-part-2.md
similarity index 91%
rename from posts/archive/getting-started-with-ansible.utils-collection-for-playbook-creators-part-2.md
rename to posts/archive/getting-started-with-ansible-utils-collection-for-playbook-creators-part-2.md
index 77ec7c5b..9a2e759b 100644
--- a/posts/archive/getting-started-with-ansible.utils-collection-for-playbook-creators-part-2.md
+++ b/posts/archive/getting-started-with-ansible-utils-collection-for-playbook-creators-part-2.md
@@ -4,19 +4,18 @@
 date: 2022-01-24 00:00 UTC
 description: In this blog, we will see how ansible.utils collection
   can be useful in operational state assessment as an example use case.
lang: en-us -title: Getting Started with Ansible.utils Collection for Playbook Creators, Part 2 +slug: getting-started-with-ansible.utils-collection-for-playbook-creators-part-2 +title: Getting Started with Ansible.utils Collection for Playbook Creators --- -# Getting Started with Ansible.utils Collection for Playbook Creators, Part 2 +# Part 2: Use case of operational state assessment using ansible.utils collection -## Use Case: Operational state assessment using ansible.utils collection - -In ansible.utils, there are a variety of plugins which we can use for +In `ansible.utils`, there are a variety of plugins which we can use for operational state assessment of network devices. I overviewed the -ansible.utils collection in part one of this two part blog series. If +`ansible.utils` collection in part one of this two part blog series. If you have not reviewed part one, I recommend you do so, since I will build on this information in this -part two blog. We will see how the ansible.utils collection can be +part two blog. We will see how the `ansible.utils` collection can be useful in operational state assessment as an example use case. In general, state assessment workflow has following steps: @@ -38,14 +37,14 @@ In general, state assessment workflow has following steps: -  Implement required configuration changes to correct drift.  - Report on the change as an audit trail. -### How can ansible.utils collection help in this workflow? +## How can ansible.utils collection help in this workflow? -The ansible.utils collection makes it easier to retrieve and parse the +The `ansible.utils` collection makes it easier to retrieve and parse the data so it can then be further assessed from a structured format. -#### Retrieving operational state in structured format using Ansible.utils.cli_parse +### Retrieving operational state in structured format using Ansible.utils.cli_parse -This module is available as ansible.utils collection. 
It has a variety +This module is available as `ansible.utils` collection. It has a variety of parsers which help to parse CLI output or text output. It can work with multiple remote hosts like network, Linux, or windows.it. It supports multiple parsing engines and it is extensible which means you @@ -102,7 +101,7 @@ All of the generic parsers are part of the `ansible.utils` collection and all network-related parsers are part of the `ansible.netcommon` collection. -#### Validating structured data and report errors using ansible.utils.validate +### Validating structured data and report errors using ansible.utils.validate The `Ansible.utils.validate` module is a new module available as part of the `ansible.utils` collection which works with all platforms. It has @@ -129,7 +128,7 @@ tasks: In this task we need to provide data which is supposed to be structured data. Criteria is a list of criteria. Since currently we are using jsonschema, we have criteria in json format. Engine is a sub-plugin of -the top level validate plugin. Here it is "ansible.utils.jsonschema". +the top level validate plugin. Here it is `ansible.utils.jsonschema`. Again, you can write your own engine as it is extensible.  The above task will perform following operation: @@ -138,14 +137,14 @@ The above task will perform following operation: - Validate using the 'xxxx' engine - Returns list of errors if data does not conform to the schema criteria -Currently ansible.utils.validate plugin supports following validation +Currently `ansible.utils.validate` plugin supports following validation engine: - `ansible.utils.jsonschema`: Python module to validate json data against a schema.  -Now let's use the above plugins from ansible.utils to see how we can +Now let's use the above plugins from `ansible.utils` to see how we can use them in actual scenarios. 
In this example we will see how to use -ansible.utils to fetch BGP operational state data, validate it against +`ansible.utils` to fetch BGP operational state data, validate it against predefined json schema and also remediate configuration drift when detected.  @@ -316,9 +315,9 @@ external events or also can be scheduled as a periodic job in Red Hat Ansible Automation Platform's Automation controller to ensure compliance with the expected operational state. -# Takeaways & Next Steps +## Takeaways & Next Steps -As shown above, the ansible.utils collection:  +As shown above, the `ansible.utils` collection:  - Makes operational state assessment easier, complementing Ansible Automation Platform's configuration management capabilities. diff --git a/posts/archive/getting-started-with-aws-ansible-module-development.md b/posts/archive/getting-started-with-aws-ansible-module-development.md new file mode 100644 index 00000000..d1163c13 --- /dev/null +++ b/posts/archive/getting-started-with-aws-ansible-module-development.md @@ -0,0 +1,460 @@ +--- +author: Jill Rouleau +date: 2020-10-06 00:00 UTC +description: If you're already using the Ansible AWS modules, there are + many ways to use your existing knowledge, skills and experience to + contribute. If you need some ideas on where to contribute, take a look + at the following post. +lang: en-us +title: Getting Started With AWS Ansible Module Development and Community Contribution +--- + +# Getting Started With AWS Ansible Module Development and Community Contribution + +We often hear from cloud admins and developers that they're interested +in giving back to Ansible and using their knowledge to benefit the +community, but they don't know how to get started.  Lots of folks may +even already be carrying new Ansible modules or plugins in their local +environments, and are looking to get them included upstream for more +broad use. + +Luckily, it doesn't take much to get started as an Ansible contributor. 
+
+If you're already using the Ansible AWS modules, there are many ways to
+use your existing knowledge, skills and experience to contribute. If you
+need some ideas on where to contribute, take a look at the following:
+
+- Creating integration tests: Creating [missing tests](https://github.com/orgs/ansible-collections/projects/4#column-9963846)
+  for modules is a great way to get started, and integration tests are
+  just Ansible tasks!
+- Module porting: If you're familiar with the
+  [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html)
+  Python library, there's also a [backlog of modules](https://github.com/orgs/ansible-collections/projects/4#column-9964369)
+  that need to be ported from boto2 to boto3.
+- Repository issue triage: And of course there's always open GitHub
+  [issues](https://github.com/ansible-collections/amazon.aws/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen)
+  and pull requests. Testing bugs or patches and providing feedback on
+  your use cases and experiences is very valuable.
+
+## The AWS Collections
+
+Starting with Ansible 2.10, the AWS modules have been migrated out of
+the [Ansible GitHub repo](https://github.com/ansible/ansible) and into
+two new [Collection](https://docs.ansible.com/ansible/latest/user_guide/collections_using.html)
+repositories.
+
+The [Ansible-maintained Collection](https://github.com/ansible-collections/amazon.aws)
+(`amazon.aws`) houses the modules, plugins, and module utilities that are managed by the Ansible
+Cloud team and are included in the downstream Red Hat Ansible Automation Platform product.
+
+The [Community Collection](https://github.com/ansible-collections/community.aws)
+(`community.aws`) houses the modules and plugins that are supported by the Ansible community.  
+New modules and plugins developed by the community should be proposed to
+`community.aws`. 
Content in this Collection that is stable and meets other acceptance criteria +has the potential to be promoted and migrated into amazon.aws. + +For more information about how to contribute to any of the +Ansible-maintained Collections, including the AWS Collections, refer to +the [Contributing to Ansible-maintained Collections section on docs.ansible.com](https://docs.ansible.com/ansible/devel/community/contributing_maintained_collections.html). + + +## AWS module development basics + +For starters, make sure you've read the +[Guidelines for Ansible Amazon AWS module development](https://docs.ansible.com/ansible/devel/dev_guide/platforms/aws_guidelines.html) +section of the Ansible Developer Guide. Some things to keep in mind: + +If the module needs to poll an API and wait for a particular status to +be returned before proceeding, add a +[waiter](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/clients.html) +to the [waiters.py](https://github.com/ansible-collections/amazon.aws/blob/main/plugins/module_utils/waiters.py) +file in the `amazon.aws` collection rather than writing a loop inside your module. For example, +the `ec2_vpc_subnet` module supports a wait parameter. When true, this instructs the module +to wait for the resource to be in an expected state before returning. 
+
+The module code for this looks like the following:
+
+```python
+if module.params['wait']:
+    handle_waiter(conn, module, 'subnet_exists', {'SubnetIds': [subnet['id']]}, start_time)
+```
+
+And the corresponding waiter:
+
+```python
+    "SubnetExists": {
+        "delay": 5,
+        "maxAttempts": 40,
+        "operation": "DescribeSubnets",
+        "acceptors": [
+            {
+                "matcher": "path",
+                "expected": True,
+                "argument": "length(Subnets[]) > `0`",
+                "state": "success"
+            },
+            {
+                "matcher": "error",
+                "expected": "InvalidSubnetID.NotFound",
+                "state": "retry"
+            },
+        ]
+    },
+```
+
+This polls the EC2 API for `describe_subnets(SubnetIds=[subnet['id']])`
+until the list of returned Subnets is greater than zero before
+proceeding. If an error of `InvalidSubnetID.NotFound`
+is returned, this is an expected response and the waiter code will continue.
+
+Use [paginators](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html)
+when boto returns paginated results and build the result from the
+`.build_full_result()` method of the paginator, rather than writing loops.
+
+Be sure to handle both `ClientError` and `BotoCoreError` in your except blocks.
+
+```python
+except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+    module.fail_json_aws(e, msg="Couldn't create subnet")
+```
+
+All new modules should support
+[check_mode](https://docs.ansible.com/ansible/latest/user_guide/playbooks_checkmode.html#check-mode-dry)
+if at all possible.
+
+Ansible strives to provide
+[idempotency](https://en.wikipedia.org/wiki/Idempotence). Sometimes
+though, this is inconsistent with the way that AWS services operate.
+Think about how users will interact with the service through Ansible
+tasks, and what will happen if they run the same task multiple times.  
+What API calls will be made?  What changed status will be reported by
+Ansible on subsequent task executions?
+
+Whenever possible, avoid hardcoding data in modules. 
Sometimes it's
+unavoidable, but if your contribution includes a hardcoded list of
+instance types or a hard-coded
+[partition](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arns-syntax),
+this will likely be brought up in code review - for example,
+`arn:aws:` will not match the GovCloud or China regions, and your module will not work for users
+in these regions. If you've already determined there's no reasonable way
+to avoid hard-coding something, please mention your findings in the pull
+request.
+
+## Module Utilities
+
+There's a substantial collection of `module_utils` available for working with AWS located in the `amazon.aws` collection:
+
+```bash
+$ ls plugins/module_utils/
+acm.py batch.py cloudfront_facts.py cloud.py core.py direct_connect.py ec2.py elb_utils.py elbv2.py iam.py __init__.py rds.py s3.py urls.py waf.py waiters.py
+```
+
+Of particular note, `module_utils/core.py` contains `AnsibleAWSModule()`,
+which is the required base class for all new modules. This provides some
+nice helpers like `client()` setup, the `fail_json_aws()` method
+(which will convert boto exceptions into nice error messages and handle
+error message type conversion for Python2 and Python3), and the class
+will handle boto library import checks for you.
+
+AWS APIs tend to use and return [Camel case](https://en.wikipedia.org/wiki/Camel_case) values, while Ansible
+prefers [Snake case](https://en.wikipedia.org/wiki/Snake_case). Helpers
+for converting between these are available in
+`amazon.aws.module_utils.ec2`,
+including
+`ansible_dict_to_boto3_filter_list()`,
+`boto3_tag_list_to_ansible_dict()`,
+and a number of tag and policy related functions.
+
+## Integration Tests
+
+The AWS Collections primarily rely on [functional integration tests](https://docs.ansible.com/ansible/latest/dev_guide/testing_integration.html)
+to exercise module and plugin code by creating, modifying, and deleting
+resources on AWS. 
Test suites are located in the Collection repository
+that contains the module being tested.  The preferred style is a role
+named for the module, with one test suite per module.
+Sometimes it makes sense to combine the tests for more than one module
+into a single test suite, such as when a tightly coupled service
+dependency exists. These will generally be named for the primary module
+or service being tested.  For example,
+`*_info` modules may
+share a test with the service they provide information for. An aliases
+file in the root of the test directory controls various settings,
+including which tests are aliased to that test role.
+
+```bash
+tests/integration/targets/ecs_cluster$ ls
+aliases defaults files meta tasks
+
+tests/integration/targets/ecs_cluster$ cat aliases
+cloud/aws
+ecs_service_info
+ecs_task
+ecs_taskdefinition
+ecs_taskdefinition_info
+unsupported
+```
+
+In this case, several modules are combined into one test, because an
+`ecs_cluster` must be
+created before an
+`ecs_taskdefinition` can
+be created. There is a strong dependency here.
+
+You may also notice that ECS is not currently supported in the Ansible
+CI environment.  There are a few reasons that could be, but the most
+common one is that we don't allow unrestricted resource usage in the CI
+AWS account. We have to create [IAM policies](https://github.com/mattclay/aws-terminator/tree/master/aws/policy)
+that allow the minimum possible access for the test coverage. Other
+reasons for tests being unsupported might be because the module needs
+resources that we don't have available in CI, such as a federated
+identity provider. See the **CI Policies and Terminator Lambda** section
+below for more information.
+
+Another test suite status you might see is unstable. That means the test
+has been observed to have a high rate of transient failures. 
Common +reasons include needing to wait for the resource to reach a given state +before proceeding or tests taking too long to run and exceeding the test +timer. These may require refactoring of module code or tests to be more +stable and reliable. Unstable tests only get run when the module they +cover is modified and may be retried if they fail. If you find you enjoy +testing, this is a great area to get started in! + +Integration tests should generally check the following tasks or +functions both **with and without** check mode: + +- Resource creation +- Resource creation again (idempotency) +- Resource modification +- Resource modification again (idempotency) +- Resource deletion +- Resource deletion (of a non-existent resource) + +Use `module_defaults` for +credentials when creating your integration test task file, rather than +duplicating these parameters for every task. Values specified in +`module_defaults` can be +overridden per task if you need to test how the module handles bad +credentials, missing region parameters, etc. + +```yaml +- name: set connection information for aws modules and run tasks + module_defaults: + group/aws: + aws_access_key: "{{ aws_access_key }}" + aws_secret_key: "{{ aws_secret_key }}" + security_token: "{{ security_token | default(omit) }}" + region: "{{ aws_region }}" + + block: + + - name: Test Handling of Bad Region + ec2_instance: + region: "us-nonexistent-7" + ... params … + + - name: Do Something + ec2_instance: + ... params ... + + - name: Do Something Else + ec2_instance: + ... params ... +``` + +Integration tests should make use of +[blocks](https://docs.ansible.com/ansible/latest/user_guide/playbooks_blocks.html) +with test tasks in one or more blocks and a final +`always:` block that +deletes all resources created by the tests. + +## Unit Tests + +While most modules are tested with integration tests, sometimes this is +just not feasible.  An example is when testing AWS Direct Connect. 
The +`community.aws.aws_direct_connect*` +modules can be used to establish a network transit link between AWS and +a private data center. This is not a task that can be done simply or +repeatedly in a CI test system. For modules that cannot practically be +integration tested, we do require [unit tests](https://docs.ansible.com/ansible/devel/dev_guide/testing_units_modules.html#testing-units-modules) +for inclusion into any AWS Ansible Collection.  The +[placebo](https://pypi.org/project/placebo/) Python library provides a +nice mechanism for recording and mocking boto3 API responses and is +preferred to writing and maintaining AWS fixtures when possible. + +## CI Policies and Terminator Lambda + +The Ansible AWS CI environment has safeguards and specific tooling to +ensure resources are properly restricted, and that test resources are +cleaned up in a reasonable amount of time. These tools live in the +[aws-terminator](https://github.com/mattclay/aws-terminator) repository. +There are three main sections of this repository to be aware of: + +1. The `aws/policy/` directory +2. The `aws/terminator/` directory +3. The `hacking/` directory + +The `aws/policy/` directory contains IAM policies used by the Ansible CI service. +We generally attempt to define the minimum AWS IAM Actions and Resources +necessary to execute comprehensive integration test coverage. For +example, rather than enabling `ec2:*`, we have multiple +statement IDs, [Sids](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_sid.html) +that specify different actions for different resource specifications. 
+
+We permit `ec2:DescribeImages` fairly broadly in the region our CI runs in:
+
+```yaml
+  Resource:
+    - "*"
+  Condition:
+    StringEquals:
+      ec2:Region:
+        - '{{ aws_region }}'
+```
+
+But are more restrictive on which instance types can be started or run via CI:
+
+```yaml
+- Sid: AllowEc2RunInstancesInstanceType
+  Effect: Allow
+  Action:
+    - ec2:RunInstances
+    - ec2:StartInstances
+  Resource:
+    - arn:aws:ec2:us-east-1:{{ aws_account_id }}:instance/*
+  Condition:
+    StringEquals:
+      ec2:InstanceType:
+        - t2.nano
+        - t2.micro
+        - t3.nano
+        - t3.micro
+        - m1.large # lowest cost instance type with EBS optimization supported
+```
+
+The `aws/terminator/` directory contains the terminator application, which we deploy to AWS
+Lambda.  This acts as a cleanup service in the event that any CI job
+fails to remove resources that it creates.  Information about writing a
+new terminator class can be found in the terminator's
+[README](https://github.com/mattclay/aws-terminator/blob/master/aws/README.md).
+
+The `hacking/` directory contains a playbook and two sets of policies that are intended for
+contributors to use with their own AWS accounts.  The `aws_config/setup-iam.yml`
+playbook creates IAM policies and associates them with two iam_groups.
+These groups can then be associated with your own appropriate user:
+
+- *ansible-integration-ci*: This group mirrors the permissions used by
+  the AWS collections CI
+- *ansible-integration-unsupported*: The group assigns additional
+  permissions on top of the 'CI' permissions required to run the
+  'unsupported' tests
+
+Usage information to deploy these groups and policies to your AWS user
+is documented in the [setup-iam.yml](https://github.com/mattclay/aws-terminator/blob/master/hacking/aws_config/setup-iam.yml)
+playbook.
+
+## Testing Locally
+
+You've now written your code and your test cases, but you'd like to run
+your tests locally before pushing to GitHub and sending the change
+through CI.  Great!  
You'll need credentials for an AWS account and a
+few setup steps.  
+
+Ansible includes a CLI utility to run integration tests.  You can either
+set up a [boto profile](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html)
+in your environment or use a credentials config file to authenticate to
+AWS.  A [sample config](https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/config/cloud-config-aws.ini.template)
+file is provided by the ansible-test application included with Ansible.  
+Copy this file to `tests/integration/cloud-config-aws.ini` in your local checkout of
+the collection repository and fill in your AWS account details for
+`@ACCESS_KEY`, `@SECRET_KEY`, `@SECURITY_TOKEN`, `@REGION`.
+
+**NOTE:** Both AWS Collection repositories have a
+`tests/.gitignore` file that will ignore this file path when checking in code, but you should
+always be vigilant when storing AWS credentials to disk or in a
+repository directory.
+
+If you already have Ansible installed on your local machine,
+`ansible-test` should already be in your PATH.  If not, you can run it from a local checkout
+of the Ansible project.
+
+```bash
+git clone https://github.com/ansible/ansible.git
+cd ansible/
+source hacking/env-setup
+```
+
+You will also need to ensure that any Collection dependencies are
+installed and accessible in your
+[COLLECTIONS_PATHS](https://docs.ansible.com/ansible/devel/reference_appendices/config.html#collections-paths).  
+Collection dependencies are listed in the
+`tests/requirements.yml` file in the Collection and can be installed with the
+`ansible-galaxy collection install` command.
+
+You can now run integration tests from the Collection repository:
+
+```bash
+cd ~/src/collections/ansible_collections/amazon/aws
+ansible-test integration ec2_group
+```
+
+Tests that are unstable or unsupported will not be executed by default.  
+To run these types of tests, there are additional flags you can pass to +`ansible-test`: + +```bash +ansible-test integration ec2_group --allow-unstable --allow-unsupported +``` + +If you prefer to run the tests in a container, there is a default test +image that `ansible-test` +can automatically retrieve and run that contains the necessary Python +libraries for AWS tests.  This can be pulled and run by providing the +`--docker` flag.  (Docker must already be installed and configured on your local system.) + +```bash +ansible-test integration ec2_group --allow-unstable --allow-unsupported --docker +``` + +The test container image ships with all Ansible-supported versions of +Python.  To specify a particular Python version, such as 3.7, test with: + +```bash +ansible-test integration ec2_group --allow-unstable --allow-unsupported --docker --python 3.7 +``` + +**NOTE:** Integration tests will create real resources in the specified +AWS account subject to AWS pricing for the resource and region.  +Existing tests should make every effort to remove resources at the end +of the test run, but make sure to check that all created resources are +successfully deleted after executing a test suite to prevent billing +surprises.  This is especially recommended when developing new test +suites or adding new resources not already covered by the test's always +cleanup block.   + +**NOTE:** Be cautious when working with IAM, security groups, and other +access controls that have the potential to expose AWS account access or +resources. + +## Submitting a Change + +When your change is ready to submit, open a [pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/proposing-changes-to-your-work-with-pull-requests) +(PR) in the GitHub repository for the [appropriate AWS Collection](https://docs.ansible.com/ansible/devel/community/contributing_maintained_collections.html).  
Shippable CI will automatically run tests and report the results back to
the PR. If your change is for a new module or tests new AWS resources
or actions, you may see permissions failures in the tests. In that case,
you will also need to open a PR in the [mattclay/aws-terminator repository](https://github.com/mattclay/aws-terminator/)
to add IAM permissions and possibly a [Terminator class](https://github.com/mattclay/aws-terminator/blob/master/aws/README.md)
to support testing the new functionality, as described in the
**CI Policies and Terminator Lambda** section of
this post. Members of the Ansible AWS community will triage and review
your contribution and provide any feedback they have on the
submission.

## Next Steps and Resources

Contributing to open source projects can be daunting at first, but
hopefully this blog post provides a good technical resource on how to
contribute to the AWS Ansible Collections. If you need assistance with
your contribution along the way, you can find the Ansible AWS community
in the #ansible-aws channel on [Freenode](https://freenode.net/) IRC.

Congratulations and welcome, you are now a contributor to the Ansible
project!
diff --git a/posts/archive/installing-and-using-collections-on-ansible-tower.md b/posts/archive/installing-and-using-collections-on-ansible-tower.md
index 9ce6ac8d..98222290 100644
--- a/posts/archive/installing-and-using-collections-on-ansible-tower.md
+++ b/posts/archive/installing-and-using-collections-on-ansible-tower.md
@@ -1,6 +1,6 @@
---
author: Ajay Chenampara
-date: 2020-06-01 00:00 UTC
+date: 2020-07-01 00:00 UTC
description: In this blog post we'll walk through using Ansible Collections
  with Ansible Tower, part of the Red Hat Ansible Automation Platform.
diff --git a/posts/archive/migrating-existing-content-into-a-dedicated-ansible-collection.md b/posts/archive/migrating-existing-content-into-a-dedicated-ansible-collection.md
new file mode 100644
index 00000000..0b922e03
--- /dev/null
+++ b/posts/archive/migrating-existing-content-into-a-dedicated-ansible-collection.md
@@ -0,0 +1,248 @@
---
author: XLAB Steampunk
date: 2020-04-08 00:00 UTC
description: In this blog post we will demonstrate how to migrate part
  of the existing Ansible content (modules and plugins) into a dedicated
  Ansible Collection.
lang: en-us
title: Migrating existing content into a dedicated Ansible collection
---

# Migrating existing content into a dedicated Ansible collection

Today, we will demonstrate how to migrate part of the existing Ansible
content (modules and plugins) into a dedicated Ansible Collection. We
will be using modules for managing DigitalOcean's resources as an
example so you can follow along and test your development setup. But
first, let us get the big question out of the way: Why would we want to
do that?

## Ansible on a Diet

In late March 2020, Ansible's main development branch lost almost all
of its modules and plugins. Where did they go? Many of them moved to the
[ansible-collections GitHub organization](https://github.com/ansible-collections).
More specifically, the vast majority landed in the
[community.general](https://github.com/ansible-collections/community.general)
GitHub repository that serves as their temporary home (refer to the
[Community overview README](https://github.com/ansible-collections/overview)
for more information).

The ultimate goal is to have as much of the content in the
community.general Ansible Collection as possible "adopted" by a caring
team of developers and moved into a dedicated upstream location, with a
dedicated [Galaxy namespace](https://galaxy.ansible.com/docs/contributing/namespaces.html).
Maintainers of the newly migrated Ansible Collection can then set up the
development and release processes as they see fit, (almost) free from
the requirements of the community.general collection. For more
information about the future of Ansible content delivery, please refer
to [an official blog post](https://www.ansible.com/blog/the-future-of-ansible-content-delivery),
a [Steampunk perspective](https://steampunk.si/posts/the-galactic-future-of-ansible-content/),
as well as an [AnsibleFest talk](https://www.ansible.com/how-to-build-ansible-collections-experience-from-community-members)
about Ansible Collections.

Without any further ado, let us get our hands dirty by creating a brand
new DigitalOcean Ansible Collection.

## The Migration Process

There are three main reasons why we selected DigitalOcean-related
content for migration:

1. It contains just enough material that it is not entirely trivial to
   migrate (we will need to use some homemade tools during the
   migration process).
2. DigitalOcean modules use standard features like documentation
   fragments and utility Python packages, which means that merely
   copying the modules over will not be enough.
3. It is currently part of the community.general Ansible Collection.

**Edit (2020-09-23):** The DigitalOcean modules were moved to the
`community.digitalocean` collection in July 2020, so the last
point from the list above does not hold anymore. But I guess the move
confirmed that our selection of an example was correct.

So it should not come as a surprise that content migration is a
multi-step process. We need to create a working directory, clone the
community.general Ansible Collection into it, and create an empty
destination collection. But before we can do any of that, we must decide
on the name of this new Ansible Collection.
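Collection names take the form `<namespace>.<collection_name>`, where each part is a lowercase identifier (letters, digits, and underscores). A quick shell check of a candidate name can save a failed upload later; note that the regex below is a simplified approximation of Galaxy's actual naming rules:

```shell
name="digital_ocean.digital_ocean"

# Each part: a lowercase letter or underscore first, then lowercase
# letters, digits, or underscores; the two parts joined by a dot.
if [[ "$name" =~ ^[a-z_][a-z0-9_]*\.[a-z_][a-z0-9_]*$ ]]; then
    echo "valid collection name: $name"
else
    echo "invalid collection name: $name"
fi
```

Running this prints `valid collection name: digital_ocean.digital_ocean`, whereas a name with uppercase letters or dashes would be rejected.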

It is a well-known fact that there are only two hard things in Computer
Science: cache invalidation, naming things, and off-by-one errors ;)
Fortunately, in our case, finding a proper name is relatively simple:
because we are moving all modules for working with DigitalOcean's cloud
platform, we will name it *digital_ocean.digital_ocean*.

```bash
$ mkdir -p ~/work/ansible_collections
$ cd ~/work/ansible_collections
$ mkdir community
$ git clone --depth 1 --branch 0.2.0 \
    https://github.com/ansible-collections/community.general.git \
    community/general
$ ansible-galaxy collection init digital_ocean.digital_ocean
$ cd digital_ocean/digital_ocean
```

With the directories in place, we can start copying the content over
into our new Ansible Collection. But instead of just moving the modules
over, we will also take the opportunity to rename them.

DigitalOcean-related module names all have the *digital_ocean_* prefix
because up until Ansible 2.8, all modules lived in the same global
namespace. Now that we are moving them into a separate namespace, we can
safely drop that prefix. We could do that manually, but writing a few
lines of Bash will be faster and more intellectually satisfying:

```bash
$ source=../../community/general
$ mkdir -p plugins/modules
$ for m in $(find $source/plugins/modules/ -name 'digital_ocean_*.py' -type f)
> do
>   new_name=$(basename $m | sed 's/digital_ocean_//')
>   echo "  Copying $(basename $m) -> $new_name"
>   cp $m plugins/modules/$new_name
> done
```

Next, we need to copy over the utility Python files that our modules use. We
can get a list of all such files by searching for the *module_utils*
imports:

```bash
$ grep -h "ansible_collections.community.general.plugins.module_utils."
\
    plugins/modules/*.py | sort | uniq
```

We need to move a single Python file over and then fix the import
statements, which is easy enough to automate:

```bash
$ mkdir plugins/module_utils
$ cp ../../community/general/plugins/module_utils/digital_ocean.py \
    plugins/module_utils/
$ sed -i -e 's/ansible_collections.community.general.plugins/./' \
    plugins/modules/*.py
```

The last thing that we need to fix is the documentation. Because we
renamed the modules during the move, we need to drop the
*digital_ocean_* prefix from the `module: digital_ocean_<name>` line in
each module's `DOCUMENTATION` block. We also need to adjust the EXAMPLES
section and replace the old module names with fully qualified ones. sed
time again:

```bash
$ sed -i -r \
    -e 's/module: +digital_ocean_([^ ]+)/module: \1/' \
    -e 's/digital_ocean_([^ ]+):/digital_ocean.digital_ocean.\1:/' \
    plugins/modules/*.py
```

We also need to copy over any documentation fragments that our modules
use. We can identify them by searching for the *community.general.*
string in our modules:

```bash
$ grep -h -- "- community.general." plugins/modules/*.py | sort | uniq
```

Now, we must repeat the steps we did with the utility files: copy the
documentation fragment files over and update the references. Again,
because our fragment now lives in its own dedicated namespace, we can
rename it into something more meaningful. Since our documentation
fragment contains definitions of common parameters, we will call it
*common*. And we promise that this is the last fix that we do using sed
and regular expressions. ;)

```bash
$ mkdir plugins/doc_fragments
$ cp ../../community/general/plugins/doc_fragments/digital_ocean.py \
    plugins/doc_fragments/common.py
$ sed -i -e 's/community.general.digital_ocean.documentation/digital_ocean.digital_ocean.common/' \
    plugins/modules/*.py
```

And we are done.
Time to pat ourselves on the back and commit the work:

```bash
$ git init && git add . && git commit -m "Initial commit"
```

If you are only interested in the final result of this migration, you
can download it from the
[digital_ocean.digital_ocean](https://github.com/xlab-si/digital_ocean.digital_ocean)
GitHub repo.

## Taking Our New Collection for a Ride

If we want to test our newly created DigitalOcean Ansible Collection, we
need to tell Ansible where it should search for it. We can do that by
setting the *ANSIBLE_COLLECTIONS_PATHS* environment variable to point
to our work directory. How will we know if things work? We will kindly
ask Ansible to print the module documentation for us.

```bash
$ export ANSIBLE_COLLECTIONS_PATHS=~/work
$ ansible-doc digital_ocean.digital_ocean.domain
```

If all went according to plan, the last command brought up the
documentation for the domain module. Note that we used the domain
module's fully qualified collection name (FQCN) in the last command. If
we leave out the namespace and collection name parts, Ansible will not
be able to find our module.

And as the ultimate test, we can also run a simple playbook like this one:

```yaml
---
- name: DigitalOcean test playbook
  hosts: localhost
  gather_facts: false

  tasks:
    - name: Create a new droplet
      digital_ocean.digital_ocean.droplet:
        name: mydroplet
        oauth_token: OAUTH_TOKEN
        size: 2gb
        region: sfo1
        image: centos-8-x64
        wait_timeout: 500
```

When we execute the `ansible-playbook play.yaml` command, Ansible will
yell at us because we provided an invalid authentication token. But we
should not be sad, because the error message proves that our module is
working as expected.

## Where to Go from Here

In today's post, we demonstrated what the initial steps of content
migration are. But the list does not end here.
If we were serious about
maintaining Ansible Collections such as this one, we would need to add
tests for our modules and set up CI/CD integration.

Another thing that we left out of this post is how to push the newly
created Ansible Collection to Ansible Galaxy. We did this not because
publishing a collection is particularly hard, but because it is almost
too easy. All one has to do is get hold of the digital_ocean namespace
and then run the following two commands:

```bash
$ ansible-galaxy collection build
$ ansible-galaxy collection publish \
    --api-key {galaxy API key here} \
    digital_ocean-digital_ocean-1.0.0.tar.gz
```

Publishing a collection that we do not intend to maintain would be a
disservice to the Ansible community. Quality over quantity.

If you need help with migrating existing content into a dedicated
Ansible Collection and maintaining it in the long run,
[contact our experts](https://steampunk.si/#contact-us)
and they will gladly help you out.

Cheers!
diff --git a/posts/archive/red-hat-and-ibm-and-the-ansible-community.md b/posts/archive/red-hat-and-ibm-and-the-ansible-community.md
new file mode 100644
index 00000000..2750fb01
--- /dev/null
+++ b/posts/archive/red-hat-and-ibm-and-the-ansible-community.md
@@ -0,0 +1,52 @@
---
author: The Ansible Community Team
date: 2019-07-15 00:00 UTC
description: "Now that Red Hat is a part of IBM, some people may wonder
  about the future of the Ansible project. Here is the good news: the
  Ansible community strategy has not changed."
lang: en-us
title: The Song Remains The Same
---

# The Song Remains The Same

[Now that Red Hat is a part of IBM](https://www.redhat.com/en/about/press-releases/ibm-closes-landmark-acquisition-red-hat-34-billion-defines-open-hybrid-cloud-future),
some people may wonder about the future of the Ansible project.

Here is the good news: the Ansible community strategy has not changed.

As always, we want to make it as easy as possible to work with any
projects and communities who want to work with Ansible. With the
resources of IBM behind us, we plan to accelerate these efforts. We want
to do more integrations with more open source communities and more
technologies.

One of the reasons we are excited about the merger is that IBM
understands the importance of a broad and diverse community. Search for
"Ansible" plus the name of almost any open source project, and you can
find Ansible information, such as playbooks and modules and blog posts
and videos and slide decks, intended to make working with that project
easier. We have thousands of people attending Ansible meetups and events
all over the world. We have millions of downloads. We have had this
momentum because we provide users flexibility and freedom. IBM is
committed to our independence as a community so that we can continue
this work.

We've worked hard to be good open source citizens. We value the trust
that we've built with our users and our contributors, and we intend to
continue to live up to the trust that our community has placed in us.
IBM is committed to the same ideals and will be supportive of our
ongoing efforts to build a strong, diverse community. The song remains
the same.

If you have questions or would like to learn more about the IBM
acquisition, we encourage you to review the list of materials below. Red
Hat CTO Chris Wright will host an online Q&A session on July 23 where
you can ask any questions you may have about what the acquisition means
for Red Hat and our involvement in open source communities. Details will
be announced on the [Red Hat blog](https://www.redhat.com/en/blog).

Additional resources:

- [Press release](https://www.redhat.com/en/about/press-releases/ibm-closes-landmark-acquisition-red-hat-34-billion-defines-open-hybrid-cloud-future)
- [Blog from Chris Wright](https://www.redhat.com/en/blog/red-hat-and-ibm-accelerating-adoption-open-source)
- [IBM + Red Hat FAQ for Communities](https://community.redhat.com/blog/2019/07/faq-for-communities/)
diff --git a/posts/archive/the-future-of-ansible-content-delivery.md b/posts/archive/the-future-of-ansible-content-delivery.md
new file mode 100644
index 00000000..cf66b9aa
--- /dev/null
+++ b/posts/archive/the-future-of-ansible-content-delivery.md
@@ -0,0 +1,88 @@
---
author: Dylan Silva
date: 2019-07-23 00:00 UTC
description: Dylan Silva goes through the future of how Ansible
  content is delivered.
lang: en-us
title: The Future of Ansible Content Delivery
---

# The Future of Ansible Content Delivery

Every day, I'm in awe of what Ansible has grown to be. The incredible
growth of the community and the viral adoption of the technology have
resulted in a content management challenge for the project.

I don't want to echo a lot of what's been said by our dear friend
[Jan-Piet Mens](https://jpmens.net/2019/06/21/i-care-about-ansible/)
or our incredible Community team, but give me a moment to take a shot at it.

Our main challenge is rooted in the ability to scale. The volume of pull
requests and issues we see day to day severely outweighs the ability of
the Ansible community to keep up with that rate of change.

As a result, we are embarking on a journey. This journey is one that we
know the community, both our content creators and content
consumers, will be interested in hearing about.

This New World Order (tongue in cheek), as we've been calling it, is a
model that will allow us to empower the community of contributors of
Ansible content (read: modules, plugins, and roles) to provide their
content at their own pace.
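To make the "own pace" idea concrete: in this model, playbook authors pin and install content on their own schedule, independent of Ansible's release cycle. A minimal sketch follows; the collection name and version constraint are purely illustrative, and the install command assumes an ansible-galaxy new enough to support collections:

```shell
# Declare the collections a project depends on, pinned independently
# of whichever ansible release happens to be installed.
cat > requirements.yml <<'EOF'
collections:
  - name: community.general
    version: ">=1.0.0"
EOF

# They would then be fetched with:
#   ansible-galaxy collection install -r requirements.yml
cat requirements.yml
```

The point is the decoupling: updating a collection is a one-line change to this file, with no new Ansible release required.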

To do this, we have made some changes to how Ansible leverages content
that is not "shipped" with it. In short, Ansible content will no longer
have to be part of a milestone Core release of the Engine itself. We
will be leveraging a delivery process and content structure/format that
helps alleviate a lot of the ambiguity and pain that currently exists
due to tying plugins to the Core Engine.

The cornerstone of this journey is something you may have heard
rumblings of out on the interwebs. This thing is called an Ansible
Content Collection, or Collection(s) for short.

To create Ansible Content Collections, we took a look at a lot of things
already in practice. We looked at other tools, other packaging formats,
delivery engines, repositories, and ultimately, ourselves. In all of
that investigation we feel we have come up with a pretty sound spec.
Below we cover some details of that.

A Collection is a strict project/directory structure for Ansible
content. Similar to the role directory structure, it highlights what is
important to Ansible Playbook execution. Here's a
graphic of that spec, created by my teammate, Tim Appnel.

![Screenshot_future-of-content-1](/images/posts/archive/screenshot_future-of-content-1.webp)

As you can see, this structure does look very similar to roles. There
are some slight differences though. Notice that the roles directory no
longer contains a library folder? The idea here is that a Collection
itself is the true encapsulation of every piece of content relevant to
it and to the playbook that is executing that content. So we've taken the
libraries out of the various roles that could live in a collection and
placed them at the top level in the plugins directory. There, all types
of plugins (yes, modules are there too, because modules are actually
plugins) will be usable by the roles and ultimately all playbooks that
could potentially call them.
This content will be "installed" in a
location that the Engine is aware of, so it will know where to look for
the content that is being called in the playbook.

Also, with these changes, we have introduced some namespacing concepts
into playbooks as well. Here's another graphic, by Tim, that is a
snippet out of a playbook that highlights that namespacing.

![Screenshot_future-of-content-2](/images/posts/archive/screenshot_future-of-content-2.png)

So what we've got here is a very simple playbook. In this playbook we
have highlighted the list of Collections that we're interested in using.
For each task, we are using the FQCN (Fully Qualified Collection
Name) path to the module. Of course, we still want to make this
simple, so playbook creators won't always have to fully qualify their
content path. As you can see in the fourth task, creators can still use
the shorthand name of a module. Ansible will search the path of
collections on a first-come, first-served basis, as defined in the
Ansible configuration or within the play itself.

That's about all I've got on Collections for now.

Happy Automating folks!