This repository has been archived by the owner on Nov 7, 2024. It is now read-only.

Merge pull request #116 from JulioPDX/ci-workshop-fixes
Fixes from virtual CICD workshop
mthiel117 authored Oct 23, 2023
2 parents 414cf82 + b6efceb commit 4d0fd0e
Showing 5 changed files with 56 additions and 44 deletions.
.devcontainer/requirements.txt (4 changes: 2 additions & 2 deletions)
@@ -16,8 +16,8 @@ identify>=1.4.20
idna
importlib-resources
isort==5.10.1
jsonschema
Jinja2
jsonschema
MarkupSafe
material
md-toc
@@ -32,9 +32,9 @@ molecule>=3.2.0,<3.5.0
molecule-docker>=0.2.4
natsort
netaddr
netmiko
packaging
paramiko
netmiko
pre-commit>=2.9.2
pre-commit-hooks>=3.3.0
psutil
.pre-commit-config.yaml (9 changes: 5 additions & 4 deletions)
@@ -2,14 +2,15 @@
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.4.0
rev: v4.5.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
exclude_types: [svg, json]
- id: requirements-txt-fixer

- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.33.0
rev: v0.37.0
hooks:
- id: markdownlint
name: Check for Linting errors on MarkDown files
@@ -19,7 +20,7 @@ repos:
- --fix

- repo: https://github.com/tcort/markdown-link-check
rev: v3.10.3
rev: v3.11.2
hooks:
- id: markdown-link-check
name: Markdown Link Check
@@ -33,7 +34,7 @@ repos:
- --config=config.json

- repo: https://github.com/errata-ai/vale
rev: ab5fe92
rev: v2.29.6
hooks:
- id: vale
files: workshops/
requirements.txt (3 changes: 1 addition & 2 deletions)
@@ -2,10 +2,9 @@ MarkupSafe
md-toc
mdx_truly_sane_lists
mkdocs
mkdocs-material
mkdocs-git-revision-date-plugin
mkdocs-glightbox
mkdocs-include-dir-to-nav
mkdocs-material-extensions
mkdocs-material
mkdocs-pymdownx-material-extras
pre-commit
workshops/avd-lab-guide.md (30 changes: 15 additions & 15 deletions)
@@ -17,13 +17,13 @@ In this example, the ATD lab is used to create the L2LS Dual Data Center topolog
| s2-host1 | 10.30.30.100 |
| s2-host2 | 10.40.40.100 |

## **Prepare Lab Environment**
## **Step 1 - Prepare Lab Environment**

### STEP #1 - Access the ATD Lab
### Access the ATD Lab

Connect to your ATD Lab and start the Programmability IDE. Next, create a new Terminal.

### STEP #2 - Fork and Clone branch to ATD Lab
### Fork and Clone branch to ATD Lab

An ATD Dual Data Center L2LS data model is posted on [GitHub](https://github.com/aristanetworks/ci-workshops-avd).

@@ -52,7 +52,7 @@ git config --global user.name "FirstName LastName"
git config --global user.email "name@example.com"
```

### STEP #3 - Update AVD
### Update AVD

AVD has been pre-installed in your lab environment. However, it may be on an older version (in some cases a newer version). The following steps will update AVD and modules to the valid versions for the lab.

@@ -68,7 +68,7 @@ pip3 install -r ${ARISTA_AVD_DIR}/arista/avd/requirements.txt

You must run these commands when you start your lab or a new shell (terminal).

### STEP #4 - Setup Lab Password Environment Variable
### Setup Lab Password Environment Variable

Each lab comes with a unique password. We set an environment variable called `LABPASSPHRASE` with the following command. The variable is later used to generate local user passwords and connect to our switches to push configs.

@@ -86,7 +86,7 @@ echo $LABPASSPHRASE

You must run this step when you start your lab or a new shell (terminal).

### STEP #5 - Prepare WAN IP Network and Test Hosts
### Prepare WAN IP Network and Test Hosts

The last step in preparing your lab is to push pre-defined configurations to the WAN IP Network (cloud) and the four hosts used to test traffic. The spines from each site will connect to the WAN IP Network with P2P links. The hosts (two per site) have port-channels to the leaf pairs and are pre-configured with an IP address and route to reach the other hosts.

@@ -96,7 +96,7 @@ Run the following to push the configs.
make preplab
```

## **Build and Deploy Dual Data Center L2LS Network**
## **Step 2 - Build and Deploy Dual Data Center L2LS Network**

This section will review and update the existing L2LS data model. We will add features to enable VLANs, SVIs, connected endpoints, and P2P links to the WAN IP Network. After the lab, you will have enabled an L2LS dual data center network through automation with AVD. YAML data models and Ansible playbooks will be used to generate EOS CLI configurations and deploy them to each site. We will start by focusing on building out Site 1 and then repeat similar steps for Site 2. Finally, we will enable connectivity to the WAN IP Network to allow traffic to pass between sites.

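Before diving in, it helps to picture the general shape of an AVD L2LS fabric definition. The sketch below is illustrative only; the node names, IDs, and address pool are assumptions, not values copied from this repository's group_vars.

```yaml
# Illustrative L2LS fabric sketch - names, IDs, and pools are placeholders
fabric_name: SITE1_FABRIC

design:
  type: l2ls

spine:
  defaults:
    mlag_peer_ipv4_pool: 10.1.253.0/31
  node_groups:
    - group: SPINES
      nodes:
        - name: s1-spine1
          id: 1
        - name: s1-spine2
          id: 2
```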
@@ -108,9 +108,9 @@ This section will review and update the existing L2LS data model. We will add fe
4. Verify routing
5. Test traffic

## **Site 1**
## **Step 3 - Site 1**

### STEP #1 - Build and Deploy Initial Fabric
### Build and Deploy Initial Fabric

The initial fabric data model key/value pairs have been pre-populated in the following group_vars files in the `sites/site_1/group_vars/` directory.

@@ -160,7 +160,7 @@ show port-channel

The basic fabric, with MLAG peers and port-channels between the leaf and spine switches, is now created. Next up, we will add VLAN and SVI services to the fabric.

### STEP #2 - Add Services to the Fabric
### Add Services to the Fabric

The next step is to add VLANs and SVIs to the fabric. The services data model file `SITE1_FABRIC_SERVICES.yml` is pre-populated with VLANs and SVIs `10` and `20` in the default VRF.
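An AVD services definition of this kind generally declares its SVIs under a tenant and VRF; the tenant name and addressing in the sketch below are assumptions rather than the contents of `SITE1_FABRIC_SERVICES.yml`.

```yaml
# Illustrative services sketch - tenant name and addressing are placeholders
tenants:
  - name: MY_FABRIC
    vrfs:
      - name: default
        svis:
          - id: 10
            name: 'Ten'
            enabled: true
            ip_address_virtual: 10.10.10.1/24
          - id: 20
            name: 'Twenty'
            enabled: true
            ip_address_virtual: 10.20.20.1/24
```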

@@ -233,7 +234,7 @@ See the difference between the running config and the latest checkpoint file.
diff checkpoint:< filename > running-config
```

### STEP #3 - Add Ports for Hosts
### Add Ports for Hosts

Let's configure port-channels to our hosts (`s1-host1` and `s1-host2`).

@@ -266,7 +266,7 @@ PING 10.20.20.100 (10.20.20.100) 72(100) bytes of data.

Site 1 fabric is now complete.

## **Site 2**
## **Step 4 - Site 2**

Repeat the previous three steps for Site 2.

@@ -277,7 +277,7 @@ Repeat the previous three steps for Site 2.

At this point, you should be able to ping between hosts within a site but not between sites. For this, we need to build connectivity to the `WAN IP Network`. This is covered in the next section.

## **Connect Sites to WAN IP Network**
## **Step 5 - Connect Sites to WAN IP Network**

The WAN IP Network is defined by the `core_interfaces` data model. Full data model documentation is located **[here](https://avd.arista.com/4.1/roles/eos_designs/docs/tables/core-interfaces.html?h=core+interfaces)**.

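The general shape of a `core_interfaces` definition is sketched below; the node names, interfaces, and addressing are placeholders rather than this lab's actual values.

```yaml
# Illustrative core_interfaces sketch - nodes, interfaces, and IPs are placeholders
core_interfaces:
  p2p_links:
    - ip: [172.16.255.0/31, 172.16.255.1/31]
      nodes: [s1-spine1, WANCORE1]
      interfaces: [Ethernet5, Ethernet1]
```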
@@ -395,7 +395,7 @@ ping 10.40.40.100

You have built a multi-site L2LS network without touching the CLI on a single switch.

## **Day 2 Operations**
## **Step 6 - Day 2 Operations**

Our multi-site L2LS network is working great. But, before too long, it will be time to change
our configurations. Lucky for us, that time is today!
@@ -810,7 +810,7 @@ git commit -m 'add leafs'
git push --set-upstream origin add-leafs
```

## **Backing out changes**
## **Step 7 - Backing out changes**

Ruh Roh. As it turns out, we should have added these leaf switches to an entirely new site. Oops! No worries, because
we used our **add-leafs** branch, we can switch back to our main branch and then delete our local copy of the **add-leafs**
workshops/cicd-basics.md (54 changes: 33 additions & 21 deletions)
@@ -19,9 +19,10 @@ Throughout this section, we will use the following dual data center topology. Cl

This repository leverages the dual data center (DC) ATD. If you are not leveraging the ATD, you may still leverage this repository for a similar deployment. Please note that some updates may have to be made for the reachability of nodes and CloudVision (CVP) instances. This example was created with [Ansible AVD](https://avd.arista.com/4.1/index.html) version `4.1`.

### Local installation
### Installation external to the ATD environment (optional)

If running outside of the ATD interactive developer environment (IDE), you must install the base requirements.
!!! note
If running outside of the ATD interactive developer environment (IDE), you must install the base requirements.

```shell
python3 -m venv venv
@@ -31,7 +32,7 @@ export ARISTA_AVD_DIR=$(ansible-galaxy collection list arista.avd --format yaml
pip3 install -r ${ARISTA_AVD_DIR}/arista/avd/requirements.txt
```

## Fork and clone the repository
## **Step 1 - Fork and clone the repository**

You will be creating your own CI/CD pipeline in this workflow. Log in to your GitHub account and fork the [`ci-workshops-avd`](https://github.com/aristanetworks/ci-workshops-avd/) repository to get started.

@@ -66,7 +67,7 @@ You will be creating your own CI/CD pipeline in this workflow. Log in to your Gi
git config --global user.email "name@example.com"
```

### ATD programmability IDE installation
## **Step 2 - ATD programmability IDE installation**

You can check the current AVD version by running the following command:

@@ -100,7 +101,7 @@ export ARISTA_AVD_DIR=$(ansible-galaxy collection list arista.avd --format yaml
pip3 install -r ${ARISTA_AVD_DIR}/arista/avd/requirements.txt
```

### Fast-forward the main branch
## **Step 3 - Fast-forward the main branch**

On the programmability IDE, merge the `cicd-ff` branch into the `main` branch.

@@ -114,7 +115,15 @@ git merge origin/cicd-ff
???+ note
    You may get a note to edit the commit message; press ***windows*** ++ctrl++ + X or ***mac*** ++cmd++ + X to save the message and exit the text editor.

### Setup lab password environment variable
If you got the dreaded `merge: origin/cicd-ff - not something we can merge` error, you may have missed unchecking the `Copy the main branch only` option when forking. You can continue by running the following commands within the workshops directory on the IDE terminal.

```shell
git remote add upstream https://github.com/aristanetworks/ci-workshops-avd.git
git fetch upstream
git merge upstream/cicd-ff
```

## **Step 4 - Setup lab password environment variable**

Each lab comes with a unique password. We set an environment variable called `LABPASSPHRASE` with the following command. The variable is later used to generate local user passwords and connect to our switches to push configs.

@@ -125,7 +134,7 @@ Each lab comes with a unique password. We set an environment variable called `LA
export LABPASSPHRASE=`cat /home/coder/.config/code-server/config.yaml| grep "password:" | awk '{print $2}'`
```

### Configure the IP Network
## **Step 5 - Configure the IP Network**

The nodes that connect the two sites are out of scope for this workshop. We can get the hosts and EOS nodes in the IP network configured by running the `make preplab` command.

@@ -135,7 +144,7 @@ make preplab

The host and IP Network nodes will now be configured.

### Enable GitHub actions
## **Step 6 - Enable GitHub actions**

1. Go to Actions
2. Click `I understand my workflows, go ahead and enable them`
@@ -155,7 +164,7 @@ You will need to set one secret in your newly forked GitHub repository.

5. Enter the secret as follows

- Name: LABPASSPHRASE
- Name: `LABPASSPHRASE`
- Secret: Listed in ATD lab topology

![Lab credentials](assets/images/lab-creds.png)
@@ -166,7 +175,7 @@ You will need to set one secret in your newly forked GitHub repository.
!!! note
Our workflow uses this secret to authenticate with our CVP instance.

## Update local CVP variables
## **Step 7 - Update local CVP variables**

Every user will get a unique CVP instance deployed. There are two updates required.

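One of those updates typically lands in the Ansible inventory entry for the CVP node; a hypothetical shape (the group, host name, and FQDN below are placeholders, not this lab's actual inventory) is:

```yaml
# Hypothetical inventory fragment - group, host name, and FQDN are placeholders
CVP:
  hosts:
    cvp:
      ansible_host: your-topology.topo.testdrive.arista.com
```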
@@ -199,7 +208,7 @@ Every user will get a unique CVP instance deployed. There are two updates requir
!!! note
These will be the same value. Make sure to remove any prefix like `https://` or anything after `.com`

## Sync with remote repository
## **Step 8 - Sync with remote repository**

1. From the IDE terminal, run the following:

@@ -212,7 +221,7 @@ Every user will get a unique CVP instance deployed. There are two updates requir
!!! note
If the Git `user.name` and `user.email` are set, they may be skipped. You can check this by running the `git config --list` command. You will get a notification to sign in to GitHub. Follow the prompts.

## Create a new branch
## **Step 9 - Create a new branch**

In a moment, we will be deploying changes to our environment. In reality, updates to a code repository would be done from a development or feature branch. We will follow this same workflow.

@@ -223,7 +232,7 @@ In a moment, we will be deploying changes to our environment. In reality, update
git checkout -b dc-updates
```

## GitHub Actions
## **Step 10 - GitHub Actions**

GitHub Actions is a CI/CD platform within GitHub. We can leverage GitHub Actions to create automated workflows within our repository. These workflows can be as simple as notifying appropriate reviewers of a change and automating the entire release of an application or network infrastructure.

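For orientation, a workflow file lives under `.github/workflows/` in a repository and pairs a trigger with one or more jobs. The following is a minimal sketch, not the workshop's actual `dev.yml` or `prod.yml`.

```yaml
# Minimal illustrative workflow - not the workshop's dev.yml or prod.yml
name: Example pipeline

on:
  push:
    branches-ignore:
      - main

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pre-commit/action@v3.0.0   # run the repository's pre-commit hooks
```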
@@ -282,7 +291,7 @@ jobs:
...
```

#### pre-commit
### pre-commit

To get started with pre-commit, run the following commands in your ATD IDE terminal.

@@ -336,7 +345,7 @@ Finally, the setup Python and install requirements action above the pre-commit s
...
```

##### pre-commit example
#### pre-commit example

We can look at the benefits of pre-commit by introducing three errors in a group_vars file. This example will use the `sites/site_1/group_vars/SITE1_FABRIC_SERVICES.yml` file. Under VLAN 20, we can add extra whitespace after any entry, extra newlines, and move the `s1-spine2` key under the `s1-spine1` key.

@@ -356,10 +365,13 @@ We can look at the benefits of pre-commit by introducing three errors in a group
# <- Newline
```

We can run pre-commit manually by running the `pre-commit run -a` command.
We can run pre-commit manually by running the following command:

```shell
➜ ci-workshops-avd git:(main) ✗ pre-commit run -a
pre-commit run -a
```

```shell title='Output'
trim trailing whitespace.................................................Passed
fix end of files.........................................................Failed
- hook id: end-of-file-fixer
@@ -405,7 +417,7 @@ check yaml...............................................................Passed
➜ ci-workshops-avd git:(main)
```

#### Filter changes to the Pipeline
### Filter changes to the Pipeline

Currently, our workflow will build and deploy configurations for both sites. This is true even if we only have changes relevant to one site. We can use a path filter to check whether files within specific directories have been modified, signaling that a new build and deployment are required. Please take note of the `id` key. This will be referenced in our upcoming workflow steps.

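One common way to implement such a filter is the `dorny/paths-filter` action; the sketch below assumes that action and a hypothetical directory pattern, rather than the repository's exact filter definition.

```yaml
# Illustrative path filter - the directory pattern is an assumption
- name: Check for changes relevant to site 1
  uses: dorny/paths-filter@v2
  id: filter-site1
  with:
    filters: |
      workflows:
        - 'sites/site_1/**'
```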
Expand All @@ -432,7 +444,7 @@ Currently, our workflow will build and deploy configurations for both sites. Thi
...
```

#### Conditionals to control flow
### Conditionals to control flow

The Ansible collection install and test configuration steps have the conditional key of `if`. This maps to each path filter check step we used earlier. For example, the first path check has an `id` of `filter-site1`. We can reference the `id` in our workflow as `steps.filter-site1.outputs.workflows`. If this output is `true`, the filter detected a change, and the test build step for site 1 will run. One difference is that the Ansible collection install uses the `||` (or) operator. The "or" operator allows us to control when Ansible collections are installed. The collections will be installed if a change is registered in either `filter-site1` or `filter-site2`.

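A sketch of how those conditionals combine is below; the step names and commands are placeholders, not the workshop's exact steps.

```yaml
# Illustrative conditionals - step names and commands are placeholders
- name: Install Ansible collections
  if: steps.filter-site1.outputs.workflows == 'true' || steps.filter-site2.outputs.workflows == 'true'
  run: ansible-galaxy collection install -r requirements.yml

- name: Test configuration for site 1
  if: steps.filter-site1.outputs.workflows == 'true'
  run: ansible-playbook playbooks/build.yml -i sites/site_1/inventory.yml
```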
@@ -514,7 +526,7 @@ At this point, make sure both workflow files (`dev.yml` and `prod.yml`) within t
if: steps.filter-site2.outputs.workflows == 'true'
```

## Day-2 Operations - New service (VLAN)
## **Step 11 - Day-2 Operations - New service (VLAN)**

This example workflow will add two new VLANs to our sites. Site 1 will add VLAN 25, and site 2 will add VLAN 45. An example of the updated group_vars is below. The previous workshop modified the configuration of our devices directly through eAPI. This example will leverage GitHub actions with CloudVision to update our nodes. The provisioning with CVP will also create a new container topology and configlet assignment per device. For starters, we can update site 1.

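The addition for site 1 has roughly the following shape; the tenant name and addressing are illustrative, not the repository's exact group_vars.

```yaml
# Illustrative addition of VLAN 25 - tenant name and addressing are placeholders
tenants:
  - name: MY_FABRIC
    vrfs:
      - name: default
        svis:
          - id: 25
            name: 'Twenty-five'
            enabled: true
            ip_address_virtual: 10.25.25.1/24
```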
@@ -647,7 +659,7 @@ Once complete, the GitHub actions will show changes on sites 1 and 2.

![Actions](assets/images/actions-both.png)

## Creating a pull request to deploy updates (main branch)
## **Step 12 - Creating a pull request to deploy updates (main branch)**

We have activated our GitHub workflows and tested our configurations. We are now ready to create a pull request.

