intro: 'As part of a disaster recovery plan, you can protect production data on {% data variables.location.product_location %} by configuring automated backups.'
versions:
  ghes: '*'
**Note:** If {% data variables.location.product_location %} is deployed as a cluster or in a high availability configuration using a load balancer, the `GHE_HOSTNAME` can be the load balancer hostname, as long as it allows SSH access (on port 122) to {% data variables.location.product_location %}.

To ensure a recovered instance is immediately available, perform backups targeting the primary instance even in a geo-replication configuration.
{% endnote %}
1. Set the `GHE_DATA_DIR` value to the filesystem location where you want to store backup snapshots. We recommend choosing a location on the same filesystem as your backup host, but outside of where you cloned the Git repository in step 1.
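
   For example, on the backup host you might set the value in the `backup.config` file, alongside `GHE_HOSTNAME`. The hostname and path shown here are placeholders, not required values.

   ```shell
   # Example values in backup.config on the backup host
   GHE_HOSTNAME="HOSTNAME"
   GHE_DATA_DIR="/data/ghe-backups"
   ```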
## Scheduling a backup
{% ifversion backup-utilities-encryption-bug %}
{% warning %}
**Warning**: {% data reusables.enterprise_backup_utilities.enterprise-backup-utils-encryption-keys %}
{% endwarning %}
{% endif %}
You can schedule regular backups on the backup host using the `cron(8)` command or a similar command scheduling service. The configured backup frequency will dictate the worst case recovery point objective (RPO) in your recovery plan. For example, if you have scheduled the backup to run every day at midnight, you could lose up to 24 hours of data in a disaster scenario. We recommend starting with an hourly backup schedule, guaranteeing a worst case maximum of one hour of data loss if the primary site data is destroyed.
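
For example, a `cron` entry similar to the following sketch would run `ghe-backup` at the top of every hour. The installation and log paths are placeholders that depend on where you cloned {% data variables.product.prodname_enterprise_backup_utilities %} on the backup host.

```shell
# Run ghe-backup hourly and append output to a log file
# (adjust the paths to match your backup-utils checkout)
0 * * * * /opt/backup-utils/bin/ghe-backup 1>>/opt/backup-utils/backup.log 2>&1
```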
If backup attempts overlap, the `ghe-backup` command will abort with an error message, indicating the existence of a simultaneous backup. If this occurs, we recommend decreasing the frequency of your scheduled backups. For more information, see the "Scheduling backups" section of the [{% data variables.product.prodname_enterprise_backup_utilities %} README](https://github.com/github/backup-utils#scheduling-backups) in the {% data variables.product.prodname_enterprise_backup_utilities %} project documentation.
## Restoring a backup
{% ifversion backup-utilities-encryption-bug %}
{% warning %}
**Warning**: {% data reusables.enterprise_backup_utilities.enterprise-backup-utils-encryption-keys %}
{% endwarning %}
{% endif %}
In the event of a prolonged outage or catastrophic event at the primary site, you can restore {% data variables.location.product_location %} by provisioning another instance and performing a restore from the backup host. You must add the backup host's SSH key to the target {% data variables.product.prodname_enterprise %} instance as an authorized SSH key before restoring an instance.

When performing backup restores to {% data variables.location.product_location %}, you can only restore data from at most two feature releases behind. For example, if you take a backup from {% data variables.product.product_name %} 3.0.x, you can restore the backup to an instance running {% data variables.product.product_name %} 3.2.x. You cannot restore data from a backup of {% data variables.product.product_name %} 2.22.x to an instance running 3.2.x, because that would be three jumps between versions (2.22 to 3.0 to 3.1 to 3.2). You would first need to restore to an instance running 3.1.x, and then upgrade to 3.2.x.

Network settings are excluded from the backup snapshot. After restoration, you must manually configure networking on the target {% data variables.product.prodname_ghe_server %} instance.
### Prerequisites
1. Ensure maintenance mode is enabled on the primary instance and all active processes have completed. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)."
1. Stop replication on all replica nodes in a high-availability configuration. For more information, see "[AUTOTITLE](/admin/enterprise-management/configuring-high-availability/about-high-availability-configuration#ghe-repl-stop)."
1. If {% data variables.location.product_location %} has {% data variables.product.prodname_actions %} enabled, you must configure the external storage provider for {% data variables.product.prodname_actions %} on the replacement instance. For more information, see "[AUTOTITLE](/admin/github-actions/advanced-configuration-and-troubleshooting/backing-up-and-restoring-github-enterprise-server-with-github-actions-enabled)."
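
For example, the first two prerequisites can be completed from the administrative shell over SSH (port 122). This is only a sketch of one possible sequence; the hostnames are placeholders for your primary and replica nodes.

```shell
# On the primary instance, enable maintenance mode
ssh -p 122 admin@PRIMARY-HOSTNAME "ghe-maintenance -s"

# On each replica node, stop replication
ssh -p 122 admin@REPLICA-HOSTNAME "ghe-repl-stop"
```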
### Starting the restore operation
To restore {% data variables.location.product_location %} from your backup host using the last successful snapshot, use the `ghe-restore` command. You can use the following additional options with `ghe-restore`.
- The `-c` flag overwrites the settings, certificate, and license data on the target host even if it is already configured. Omit this flag if you are setting up a staging instance for testing purposes and you wish to retain the existing configuration on the target. For more information, see the "Using backup and restore commands" section of the [{% data variables.product.prodname_enterprise_backup_utilities %} README](https://github.com/github/backup-utils#using-the-backup-and-restore-commands) in the github/backup-utils repository.
- The `-s` flag allows you to select a different backup snapshot.
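
For example, the following commands, run from the {% data variables.product.prodname_enterprise_backup_utilities %} directory on the backup host, show both options. The hostname and snapshot ID are placeholders.

```shell
# Restore the most recent snapshot, overwriting the target's existing configuration
bin/ghe-restore -c HOSTNAME

# Restore a specific snapshot from the GHE_DATA_DIR directory
bin/ghe-restore -s SNAPSHOT-ID HOSTNAME
```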
After you run `ghe-restore`, the command confirms the restoration, then outputs details and status during the operation.

Optionally, to validate the restore, configure an IP exception list to allow access to a specified list of IP addresses. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode#validating-changes-in-maintenance-mode-using-the-ip-exception-list)."
{% endif %}
On an instance in a high-availability configuration, after you restore to new disks on an existing or empty instance, `ghe-repl-status` may report that Git or Alambic replication is out of sync due to stale server UUIDs. These stale UUIDs can be the result of a retired node in a high-availability configuration still being present in the application database, but not in the restored replication configuration.

To remediate, after the restoration completes and before starting replication, you can tear down stale UUIDs using `ghe-repl-teardown`. If you need further assistance, contact {% data variables.contact.contact_ent_support %}.
{% ifversion backup-utilities-progress %}
## Monitoring backup or restoration progress
During a backup or restoration operation, you can use the `ghe-backup-progress` utility on your backup host to monitor the operation's progress. The utility prints the progress of each job sequentially.

To monitor progress on the backup host, from the directory containing {% data variables.product.prodname_enterprise_backup_utilities %}, run the following command.
```shell copy
bin/ghe-backup-progress
```
By default, the utility prints progress continuously until the operation is complete. You can press any key to return to the prompt.

Optionally, you can run the following command to print the current progress, the last completed job, and then immediately exit.
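
```shell copy
bin/ghe-backup-progress --once
```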