# Geo validation tests **(PREMIUM SELF)**
The Geo team performs manual testing and validation on common deployment configurations to ensure that Geo works when upgrading between minor GitLab versions and major PostgreSQL database versions.
This section contains a journal of recent validation tests and links to the relevant issues.
## GitLab upgrades
The following are GitLab upgrade validation tests we performed.
### July 2020

**Upgrade Geo multi-node installation:**

- Description: Tested upgrading from GitLab 12.10.12 to the 13.0.10 package in a multi-node configuration. As part of the issue "Fix zero-downtime upgrade process/instructions for multi-node Geo deployments", we monitored for downtime using the looping pipeline, HAProxy stats dashboards, and a script to log readiness status on both nodes.
- Outcome: Partial success because we observed downtime during the upgrade of the primary and secondary sites.
- Follow up issues/actions:
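The readiness-logging approach used in this test can be sketched in Ruby. The actual polling loop and endpoint (for example, GitLab's `/-/readiness` health check) are omitted here; this hedged sketch only shows how downtime windows can be derived from a log of `[timestamp, healthy?]` samples:

```ruby
require 'time'

# Given readiness samples captured by polling each node, return the
# downtime windows: contiguous runs of unhealthy samples.
def downtime_windows(samples)
  windows = []
  start = nil
  samples.each do |ts, healthy|
    if !healthy && start.nil?
      start = ts                 # downtime begins at first unhealthy sample
    elsif healthy && start
      windows << [start, ts]     # downtime ends at first healthy sample
      start = nil
    end
  end
  windows << [start, samples.last.first] if start  # still down at end of log
  windows
end

# Illustrative samples (timestamps are placeholders, not recorded data)
samples = [
  [Time.parse('2020-07-15 10:00:00'), true],
  [Time.parse('2020-07-15 10:00:05'), false],
  [Time.parse('2020-07-15 10:00:10'), false],
  [Time.parse('2020-07-15 10:00:15'), true],
]
windows = downtime_windows(samples)
```

With the samples above, the script reports a single downtime window from 10:00:05 to 10:00:15.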
**Switch from repmgr to Patroni on a Geo primary site:**
- Description: Tested switching from repmgr to Patroni on a multi-node Geo primary site. Used the orchestrator tool to deploy a Geo installation with 3 database nodes managed by repmgr. With this approach, we were also able to address a related issue for verifying a Geo installation with Patroni and PostgreSQL 11.
- Outcome: Partial success. We enabled Patroni on the primary site and set up database replication on the secondary site. However, we found that Patroni would delete the secondary site's replication slot whenever Patroni was restarted. Another issue is that when Patroni elects a new leader in the cluster, the secondary site fails to automatically follow the new leader. Until these issues are resolved, we cannot officially support and recommend Patroni for Geo installations.
- Follow up issues/actions:
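The slot-deletion behavior described above is the kind of problem Patroni's permanent replication slots feature addresses: slots declared in the dynamic configuration are preserved across restarts and failovers. This is an illustrative sketch of plain Patroni configuration, not the Omnibus-managed setup used in the test, and the slot name is hypothetical:

```yaml
# Patroni dynamic (DCS) configuration: a permanent physical replication
# slot that Patroni will not drop on restart or leader change.
bootstrap:
  dcs:
    slots:
      geo_secondary:      # hypothetical slot name used by the Geo secondary
        type: physical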
### June 2020

**Upgrade Geo multi-node installation:**

- Description: Tested upgrading from GitLab 12.9.10 to the 12.10.12 package in a multi-node configuration. Monitored for downtime using the looping pipeline and HAProxy stats dashboards.
- Outcome: Partial success because we observed downtime during the upgrade of the primary and secondary sites.
- Follow up issues/actions:
**Upgrade Geo multi-node installation:**

- Description: Tested upgrading from GitLab 12.8.1 to the 12.9.10 package in a multi-node configuration.
- Outcome: Partial success because we did not run the looping pipeline during the demo to validate zero-downtime.
- Follow up issues:
  - Clarify how Puma should include the deploy node.
  - Investigate MR creation failure after upgrade to 12.9.10. Closed as false positive.
### February 2020

**Upgrade Geo multi-node installation:**
- Description: Tested upgrading from GitLab 12.7.5 to the latest GitLab 12.8 package in a multi-node configuration.
- Outcome: Partial success because we did not run the looping pipeline during the demo to monitor downtime.
### January 2020

**Upgrade Geo multi-node installation:**
- Description: Tested upgrading from GitLab 12.6.x to the latest GitLab 12.7 package in a multi-node configuration.
- Outcome: Upgrade test was successful.
- Follow up issues:
**Upgrade Geo multi-node installation:**
- Description: Tested upgrading from GitLab 12.5.7 to GitLab 12.6.6 in a multi-node configuration.
- Outcome: Upgrade test was successful.
- Follow up issue: Update documentation for zero-downtime upgrades to ensure the deploy node is not in use.
**Upgrade Geo multi-node installation:**
- Description: Tested upgrading from GitLab 12.4.x to the latest GitLab 12.5 package in a multi-node configuration.
- Outcome: Upgrade test was successful.
- Follow up issues:
### October 2019

**Upgrade Geo multi-node installation:**
- Description: Tested upgrading from GitLab 12.3.5 to GitLab 12.4.1 in a multi-node configuration.
- Outcome: Upgrade test was successful.
**Upgrade Geo multi-node installation:**
- Description: Tested upgrading from GitLab 12.2.8 to GitLab 12.3.5.
- Outcome: Upgrade test was successful.
**Upgrade Geo multi-node installation:**
- Description: Tested upgrading from GitLab 12.1.9 to GitLab 12.2.8.
- Outcome: Partial success due to possible misconfiguration issues.
## PostgreSQL upgrades
The following are PostgreSQL upgrade validation tests we performed.
### September 2021

**Verify Geo installation with PostgreSQL 13:**
- Description: With PostgreSQL 13 available as an opt-in version in GitLab 14.1, we tested fresh installations of GitLab with Geo when PostgreSQL 13 is enabled.
- Outcome: Successfully built an environment with Geo and PostgreSQL 13 using GitLab Environment Toolkit and performed Geo QA tests against the environment without failures.
### September 2020

**Verify PostgreSQL 12 upgrade for Geo installations:**
- Description: With PostgreSQL 12 available as an opt-in version in GitLab 13.3, we tested upgrading existing Geo installations from PostgreSQL 11 to 12. We also re-tested fresh installations of GitLab with Geo after fixes were made to support PostgreSQL 12. These tests were done using a nightly build of GitLab 13.4.
- Outcome: Tests were successful for Geo deployments with a single database node on the primary and secondary. We encountered known issues with repmgr and Patroni managed PostgreSQL clusters on the Geo primary. Using PostgreSQL 12 with a database cluster on the primary is not recommended until the issues are resolved.
- Known issues for PostgreSQL clusters:
### August 2020

**Verify Geo installation with PostgreSQL 12:**
- Description: Prior to PostgreSQL 12 becoming available as an opt-in version in GitLab 13.3, we tested fresh installations of GitLab 13.3 with PostgreSQL 12 enabled and Geo installed.
- Outcome: Setting up a Geo secondary required manual intervention because the `recovery.conf` file is no longer supported in PostgreSQL 12. We do not recommend deploying Geo with PostgreSQL 12 until the appropriate changes have been made to Omnibus and verified.
- Follow up issues:
### April 2020

**PostgreSQL 11 upgrade procedure for Geo installations:**
- Description: Prior to making PostgreSQL 11 the default version of PostgreSQL in GitLab 12.10, we tested upgrading to PostgreSQL 11 in Geo deployments in GitLab 12.9.
- Outcome: Partially successful. Issues were discovered in multi-node configurations with a separate tracking database, and concerns were raised about allowing automatic upgrades when Geo is enabled.
- Follow up issues:
  - `replicate-geo-database` incorrectly tries to back up repositories.
  - `pg-upgrade` fails to upgrade a standalone Geo tracking database.
  - `revert-pg-upgrade` fails to downgrade the PostgreSQL data of a Geo secondary's standalone tracking database.
  - Timeout error on Geo secondary read-replica near the end of `gitlab-ctl pg-upgrade`.
**Verify Geo installation with PostgreSQL 11:**
- Description: Prior to making PostgreSQL 11 the default version of PostgreSQL in GitLab 12.10, we tested fresh installations of GitLab 12.9 with Geo installed with PostgreSQL 11.
- Outcome: Installation test was successful.
### September 2019

**Test and validate PostgreSQL 10.0 upgrade for Geo:**
- Description: With the 12.0 release, GitLab required an upgrade to PostgreSQL 10.0. We tested various upgrade scenarios up to GitLab 12.1.8.
- Outcome: Multiple issues were found when upgrading and addressed in follow-up issues.
- Follow up issues:
## Object storage replication tests
The following are additional validation tests we performed.
### May 2021

**Test failover with object storage replication enabled:**
- Description: At the time of testing, Geo's object storage replication functionality was in beta. We tested that object storage replication works as intended and that the data was present on the new primary after a failover.
- Outcome: The test was successful. Data in object storage was replicated and present after a failover.
- Follow up issues:
### January 2022

**Validate object storage replication using Azure-based object storage:**

- Description: Tested the average time it takes for a single image to replicate from the primary object storage location to the secondary when using Azure-based object storage replication and GitLab-based object storage replication. This was tested by uploading a 1 MB image to a project on the primary site every second for 60 seconds, then measuring the time until each image was available on the secondary site. This was done using a Ruby script.
- Outcome: With Azure-based replication, the average time for an image to replicate from the primary object storage to the secondary was 40 seconds; the longest replication time was 70 seconds and the quickest was 11 seconds. With GitLab-based replication, the average replication time was 5 seconds; the longest was 10 seconds and the quickest was 3 seconds.
- Follow up issue:
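The measurement described above boils down to pairing each upload time with the moment the object first appears on the secondary. A sketch of that calculation in Ruby (the GitLab API upload and polling plumbing is omitted, and the sample values below are illustrative, not the recorded results):

```ruby
# Given [upload_time, first_seen_on_secondary] pairs (seconds since the
# start of the test run), compute average, longest, and shortest
# replication lag.
def replication_stats(pairs)
  lags = pairs.map { |uploaded, seen| seen - uploaded }
  {
    average: lags.sum / lags.size.to_f,
    longest: lags.max,
    shortest: lags.min,
  }
end

pairs = [[0, 40], [1, 12], [2, 72]]  # illustrative samples only
stats = replication_stats(pairs)
```

With these sample pairs the lags are 40, 11, and 70 seconds, so the longest lag is 70 seconds and the shortest is 11.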
### April 2022

**Validate object storage replication using AWS-based object storage:**

- Description: Tested the average time it takes for a single image to replicate from the primary object storage location to the secondary when using AWS-based object storage replication and GitLab-based object storage replication. This was tested by uploading a 1 MB image to a project on the primary site every second for 60 seconds, then measuring the time until each image was available on the secondary site. This was done using a Ruby script.
- Outcome: With AWS-managed replication, the average time for an image to replicate between sites was about 49 seconds, both when the sites were located in the same region and when they were further apart (Europe to America). With Geo-managed replication in the same region, the average replication time was just 5 seconds; when replicating cross-region, the average rose to 33 seconds.
**Validate object storage replication using GCP-based object storage:**

- Description: Tested the average time it takes for a single image to replicate from the primary object storage location to the secondary when using GCP-based object storage replication and GitLab-based object storage replication. This was tested by uploading a 1 MB image to a project on the primary site every second for 60 seconds, then measuring the time until each image was available on the secondary site. This was done using a Ruby script.
- Outcome: GCP handles replication differently than other cloud providers. In GCP, the process is to create a single bucket that is multi-, dual-, or single-region, and the bucket automatically stores replicas in a region based on the option chosen. Even the multi-region option only replicates within a single continent (the options being America, Europe, or Asia); currently there does not seem to be any way to replicate objects between continents using GCP-based replication. With Geo-managed replication, the average time when replicating within the same region was 6 seconds, rising to just 9 seconds when replicating cross-region.
## Other tests

### August 2020

**Test Gitaly Cluster on a Geo Deployment:**
- Description: Tested a Geo deployment with Gitaly clusters configured on both the primary and secondary Geo sites. Triggered automatic Gitaly cluster failover on the primary Geo site, and ran end-to-end Geo tests. Then triggered Gitaly cluster failover on the secondary Geo site, and re-ran the end-to-end Geo tests.
- Outcome: Successful end-to-end tests before and after Gitaly cluster failover on the primary site, and before and after Gitaly cluster failover on the secondary site.