
Update Gitlab Version


Gitlab Background

Toil does CI through a Gitlab instance at ucsc-ci.com.

The instance lives in AWS EC2, in the us-west-2 region of the ucsc-platform-dev account. It is connected to a domain in AWS Route 53: the instance is assigned a static Elastic IP, and the domain is hardcoded to point to it. The instance was (apparently manually) configured to terminate SSL itself.
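To double-check that the DNS record still points at the instance's Elastic IP, you can compare the two (assuming you have dig available and an AWS CLI profile pointed at the right account):

dig +short ucsc-ci.com
aws ec2 describe-addresses --region us-west-2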

The setup was originally built out using a Terraform template for the "allspark" system, which set it up with a load balancer, but the load balancer has since been ripped out manually.

We've also applied a custom Netplan configuration to work around the AWS DHCP servers not having 100% uptime: if systemd-networkd goes to renew the DHCP lease and the server doesn't answer right away, it lets the lease expire and drops the address. We set both a static and a dynamic IP, to try to keep the instance online without breaking the ability to reach it at a new IP assigned by AWS.

cat >./99-static-ip-and-dhcp.yaml <<'EOF'
network:
    ethernets:
        eth0:
            dhcp4: true
            dhcp6: false
            match:
                macaddress: 0a:e4:cd:73:f0:a3
            set-name: eth0
            addresses: 
                - 172.31.10.143/20
            gateway4: 172.31.0.1
            nameservers:
                search:
                    - us-west-2.compute.internal
                addresses: [172.31.0.2]
    version: 2
EOF

netplan try --config-file 99-static-ip-and-dhcp.yaml --timeout 60

mv 99-static-ip-and-dhcp.yaml /etc/netplan/
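To confirm that both the static address and the DHCP-assigned address actually ended up on the interface, you can check:

ip -4 addr show eth0
networkctl status eth0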

See also:

Doing Updates

Updating gitlab for ucsc-ci.com:

  • EC2 Instance is "allspark"

  • AWS account: platform-dev

  • Region: us-west-2

  • The SSH key is stored as an AWS secret (TODO: which?), or your own SSH key might already be authorized on the server

To connect (note that the server IP has probably changed; check the AWS EC2 console!):

ssh -i "allspark.pem" ubuntu@ec2-34-220-182-252.us-west-2.compute.amazonaws.com

Or:

ssh ubuntu@ucsc-ci.com
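If neither hostname works because the public IP has changed, you can look up the current public DNS name from the AWS CLI; this is a sketch that assumes the instance's Name tag is "allspark" and your CLI is pointed at the right account and region:

aws ec2 describe-instances --region us-west-2 --filters "Name=tag:Name,Values=allspark" --query "Reservations[].Instances[].PublicDnsName" --output text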

Then run:

sudo apt-get update

If the GPG Key is out of date, you may have to refresh it before running sudo apt-get update:

curl -s https://packages.gitlab.com/gpg.key | sudo apt-key add -

Don't run sudo apt-get upgrade to update Gitlab; it can jump straight to the newest version and skip the required intermediate versions.

Gitlab may require multiple installs to upgrade.

Follow the upgrade paths here:

https://docs.gitlab.com/ee/update/index.html#upgrade-paths

To see gitlab-ee versions:

sudo apt-cache policy gitlab-ee | less

Double-check the versions. Do not try to downgrade Gitlab.
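To see which version is currently installed (i.e. your starting point on the upgrade path), one option is:

dpkg -s gitlab-ee | grep Version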

Install each version in the order described by the upgrade path and then update the server. For example:

sudo apt-get install gitlab-ee=13.0.14-ee.0
sudo gitlab-ctl reconfigure && sudo gitlab-ctl restart
sudo gitlab-ctl restart redis
sudo apt-get install gitlab-ee=13.1.11-ee.0
sudo gitlab-ctl reconfigure && sudo gitlab-ctl restart
sudo gitlab-ctl restart redis
sudo apt-get install gitlab-ee=13.8.4-ee.0
sudo gitlab-ctl reconfigure && sudo gitlab-ctl restart
sudo gitlab-ctl restart redis
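Between steps, it can help to confirm that all of Gitlab's services came back up before installing the next version:

sudo gitlab-ctl status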

When Things Go Wrong

Regaining Access to the Server

If you cannot find the shared SSH private key, and your own SSH public key is not authorized on the server, you can use "EC2 Instance Connect" in the AWS console to get a shell on the machine to recover from.
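Once you have a shell that way, you can authorize your own key so normal SSH works again; a minimal sketch, with a placeholder public key:

echo 'ssh-ed25519 AAAA... you@example.com' >> /home/ubuntu/.ssh/authorized_keys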

Gitlab Downgrades

If you accidentally specify an older version of Gitlab to install, you might see this:

The following packages will be DOWNGRADED:
  gitlab-ee

You should stop and install the right version instead.

Partial or Corrupted Installations

If you abort Gitlab deb package installation in the middle, you may need to sudo dpkg --configure -a to get dpkg back into a good state.

After that, you can install Gitlab versions as normal, but you might see things like:

dpkg: warning: unable to delete old directory '/opt/gitlab/embedded/service/gitlab-rails/public/assets/sql.js/gh-pages/documentation/stylesheets': Directory not empty

Having the wrong stuff in /opt/gitlab/embedded can break the Gitlab version migration scripts, which do things like install gems in there.

You can replace the files in there with:

sudo mv /opt/gitlab/embedded /opt/gitlab/embedded.bak            
sudo apt install --reinstall gitlab-ee

You might need to disable Gitlab's automatic backup and version check logic if the existing Gitlab installation is sufficiently broken, in order to let the package install. This is very dangerous if you have not already made a backup! You can do this by creating the files /etc/gitlab/skip-auto-backup and /etc/gitlab/skip-unmigrated-data-check. Remember to remove them when you are done.
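For example, substituting the version you are actually installing:

sudo touch /etc/gitlab/skip-auto-backup /etc/gitlab/skip-unmigrated-data-check
sudo apt-get install gitlab-ee=<version>
sudo rm /etc/gitlab/skip-auto-backup /etc/gitlab/skip-unmigrated-data-check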

Upgrades Failing due to Stopped Services

Do not stop the gitlab-runsvdir systemd service. The service has to be running for the upgrade process to work, because it is responsible for running the database server, and the upgrade scripts need to talk to the database.

If you get a failure like:

* Mixlib::ShellOut::ShellCommandFailed occurred in delayed notification: runit_service[gitlab-kas] (gitlab-kas::enable line 122) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of /opt/gitlab/embedded/bin/sv restart /opt/gitlab/service/gitlab-kas ----
STDOUT: fail: /opt/gitlab/service/gitlab-kas: runsv not running
STDERR: 
---- End output of /opt/gitlab/embedded/bin/sv restart /opt/gitlab/service/gitlab-kas ----
Ran /opt/gitlab/embedded/bin/sv restart /opt/gitlab/service/gitlab-kas returned 1

(Note the runsv not running message.)

Then you need to start the service:

sudo systemctl start gitlab-runsvdir

Then make sure it is actually running with sudo systemctl status gitlab-runsvdir.

Debugging Failed Migrations

The Gitlab package installation or reconfiguration scripts might fail like this:

* Mixlib::ShellOut::ShellCommandFailed occurred in Chef Infra Client run: rails_migration[gitlab-rails] (gitlab::database_migrations line 51) had an error: Mixlib::ShellOut::ShellCommandFailed: bash[migrate gitlab-rails database] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/resources/rails_migration.rb line 16) had an error: Mixlib::ShellOut::ShellCommandFailed: Command execution failed. STDOUT/STDERR suppressed for sensitive resource

It's not clear how to get the suppressed output revealed to you, but you can run the Rails migrations separately and see what they say:

sudo gitlab-rake db:migrate

If the result is complaints about NoMethodErrors or other things that should never happen in the Ruby code under /opt/gitlab/embedded, then you might need to fix a partial or corrupted Gitlab installation, as described above.
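You can also list which migrations are up or down (a standard Rails task exposed through gitlab-rake):

sudo gitlab-rake db:migrate:status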

Checking on the Server

If you want to watch the server logs as they happen, you can run:

sudo gitlab-ctl tail
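You can also limit the output to a single service's logs, for example:

sudo gitlab-ctl tail nginx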