diff --git a/docs/common/ca.md b/docs/common/ca.md
index 68f439050..c854f2042 100644
--- a/docs/common/ca.md
+++ b/docs/common/ca.md
@@ -219,9 +219,7 @@ To modify the times that `fetch-crl-cron` runs, edit `/etc/cron.d/fetch-crl`.

| Software  | Service name                 | Notes                                      |
|:----------|:-----------------------------|:-------------------------------------------|
-| Fetch CRL | `fetch-crl.timer` (EL8-only) | Runs `fetch-crl` every 6 hours and on boot |
-|           | `fetch-crl-cron` (EL7-only)  | Runs `fetch-crl` every 6 hours             |
-|           | `fetch-crl-boot` (EL7-only)  | Runs `fetch-crl` immediately and on boot   |
+| Fetch CRL | `fetch-crl.timer` (EL8+)     | Runs `fetch-crl` every 6 hours and on boot |

Start the services in the order listed and stop them in reverse order.
As a reminder, here are common service commands (all run as `root`):
diff --git a/docs/common/help.md b/docs/common/help.md
index 913565af8..2d59a4214 100644
--- a/docs/common/help.md
+++ b/docs/common/help.md
@@ -20,8 +20,8 @@ support inquiry:

    * Troubleshooting sections or pages for the problematic software
    * Recent OSG Software release notes
+        - [OSG 24](../release/osg-24.md)
        - [OSG 23](../release/osg-23.md)
-        - [OSG 3.6](../release/osg-36.md)
* [Outage](https://status.opensciencegrid.org/) information for OSG services

### Submitting support inquiries ###
diff --git a/docs/common/yum.md b/docs/common/yum.md
index e8994e67f..c5c0a3267 100644
--- a/docs/common/yum.md
+++ b/docs/common/yum.md
@@ -28,8 +28,7 @@ OSG's RPM packages also rely on external packages provided by supported OSes and

You must have the following repositories available and enabled:

- OS repositories, including the following ones that aren't enabled by default:
-    - `extras` (SL 7, CentOS 7, CentOS Stream 8, Rocky Linux 8, AlmaLinux 8)
-    - `Server-Extras` (RHEL 7)
+    - `extras` (CentOS Stream 8, Rocky Linux 8, AlmaLinux 8)
    - `powertools` (CentOS Stream 8, Rocky Linux 8, AlmaLinux 8)
    - `CodeReady Builder` (RHEL 8) or `crb` (all EL9 variants)
- EPEL repositories
@@ -52,8 +50,8 @@ Their names start with `osg-upcoming` and
have the same structure as our standard repositories, as well as the same guarantees of quality and production-readiness.
There are separate sets of upcoming repositories for each release series.
-For example, the [OSG 23 repos](https://repo.opensciencegrid.org/osg/23-main/) have corresponding
-[23-upcoming repos](https://repo.opensciencegrid.org/osg/23-upcoming/).
+For example, the [OSG 24 repos](https://repo.osg-htc.org/osg/24-main/) have corresponding
+[24-upcoming repos](https://repo.osg-htc.org/osg/24-upcoming/).

The upcoming repositories are meant to be layered on top of our standard repositories:
installing software from the upcoming repositories requires also enabling the standard repositories from the same
release.
@@ -67,8 +65,10 @@ supported by the OSG.

The definitive list of software in the contrib repository can be found here:

-- [OSG 23 EL8 contrib software repository](https://repo.opensciencegrid.org/osg/23-contrib/el8/x86_64/)
+- [OSG 24 EL9 contrib software repository](https://repo.opensciencegrid.org/osg/24-contrib/el9/x86_64/)
+- [OSG 24 EL8 contrib software repository](https://repo.opensciencegrid.org/osg/24-contrib/el8/x86_64/)
- [OSG 23 EL9 contrib software repository](https://repo.opensciencegrid.org/osg/23-contrib/el9/x86_64/)
+- [OSG 23 EL8 contrib software repository](https://repo.opensciencegrid.org/osg/23-contrib/el8/x86_64/)

If you would like to distribute your software in the OSG `contrib` repository, please [contact us](../common/help.md) with a
description of your software, what users it serves, and relevant RPM packaging.
@@ -76,24 +76,6 @@ description of your software, what users it serves, and relevant RPM packaging.

Installing Yum Repositories
---------------------------

-### Install the Yum priorities plugin (EL7)
-
-The Yum priorities plugin is used to tell Yum to prefer OSG packages over EPEL or OS packages.
-It is important to install and enable the Yum priorities plugin before installing OSG Software to ensure that you are -getting the OSG-supported versions. - -This plugin is built into Yum on EL8 and EL9 distributions. - -1. Install the Yum priorities package: - - :::console - root@host # yum install yum-plugin-priorities - -1. Ensure that `/etc/yum.conf` has the following line in the `[main]` section: - - :::file - plugins=1 - ### Enable additional OS repositories Some packages depend on packages that are in OS repositories not enabled by default. @@ -104,15 +86,6 @@ The repositories to enable, as well as the instructions to enable them, are OS-d or if the `enabled` line is missing (i.e. it is enabled unless specified otherwise.) -#### SL 7 - -- Install the `yum-conf-extras` RPM package. -- Ensure that the `sl-extras` repo in `/etc/yum.repos.d/sl-extras.repo` is enabled. - -#### CentOS 7 - -- Ensure that the `extras` repo in `/etc/yum.repos.d/CentOS-Base.repo` is enabled. - #### CentOS Stream 8 - Ensure that the `extras` repo in `/etc/yum.repos.d/CentOS-Stream-Extras.repo` is enabled. @@ -128,10 +101,6 @@ The repositories to enable, as well as the instructions to enable them, are OS-d - Ensure that the `extras` repo in `/etc/yum.repos.d/almalinux.repo` is enabled. - Ensure that the `powertools` repo in `/etc/yum.repos.d/almalinux-powertools.repo` is enabled. -#### RHEL 7 - -- Ensure that the `Server-Extras` channel is enabled. - #### RHEL 8 - Ensure that the `CodeReady Linux Builder` channel is enabled. @@ -157,8 +126,6 @@ You must install and enable these first. - Install the EPEL repository, if not already present. Choose the right version to match your OS version. 
:::console - ## EPEL 7 (For RHEL 7, CentOS 7, and SL 7) - root@host # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm ## EPEL 8 (For RHEL 8 and CentOS Stream 8, Rocky Linux 8, AlmaLinux 8) root@host # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm ## EPEL 9 (For RHEL 9 and CentOS Stream 9, Rocky Linux 9, AlmaLinux 9) @@ -185,16 +152,26 @@ For instructions on upgrading from one OSG series to another, see the 1. Install the OSG repository for your OS version and the [OSG release series](../release/release_series.md) that you wish to use: - - OSG 23 EL8: + - OSG 24 EL9: :::console - root@host # yum install https://repo.opensciencegrid.org/osg/23-main/osg-23-main-el8-release-latest.rpm + root@host # yum install https://repo.opensciencegrid.org/osg/24-main/osg-24-main-el9-release-latest.rpm + + - OSG 24 EL8: + + :::console + root@host # yum install https://repo.opensciencegrid.org/osg/24-main/osg-24-main-el8-release-latest.rpm - OSG 23 EL9: :::console root@host # yum install https://repo.opensciencegrid.org/osg/23-main/osg-23-main-el9-release-latest.rpm + - OSG 23 EL8: + + :::console + root@host # yum install https://repo.opensciencegrid.org/osg/23-main/osg-23-main-el8-release-latest.rpm + 1. The only OSG repository enabled by default is the release one. If you want to [enable another one](#repositories) (e.g. 
`osg-testing`), then edit its file @@ -202,15 +179,14 @@ For instructions on upgrading from one OSG series to another, see the :::file hl_lines="7" [osg-testing] - name=OSG Software for Enterprise Linux 7 - Testing - $basearch - #baseurl=https://repo.opensciencegrid.org/osg/23-main/el8/testing/$basearch - mirrorlist=https://repo.opensciencegrid.org/osg/23-main/el8/testing/$basearch + name=OSG Software for Enterprise Linux 9 - Testing - $basearch + #baseurl=https://repo.opensciencegrid.org/osg/24-main/el9/testing/$basearch + mirrorlist=https://repo.opensciencegrid.org/osg/24-main/el9/testing/$basearch failovermethod=priority priority=98 enabled=1 gpgcheck=1 - gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-OSG - file:///etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-2 + gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-24-developer Optional Configuration ---------------------- @@ -226,8 +202,6 @@ Therefore we recommend security-only automatic updates or disabling automatic up To enable only security related automatic updates: -- On EL 7 variants, edit `/etc/yum/yum-cron.conf` and set `update_cmd = security` - - On EL8 and EL9 variants, edit `/etc/dnf/automatic.conf` and set `upgrade_type = security` CentOS 7, CentOS Stream 8, and CentOS Stream 9 do not support security-only automatic updates; @@ -235,11 +209,6 @@ doing any of the above steps will prevent automatic updates from happening at al To disable automatic updates entirely: -- On EL7 variants, run: - - :::console - root@host # service yum-cron stop - - On EL8 and EL9 variants, run: :::console @@ -270,10 +239,10 @@ Add the following to a file in `/etc/cron.d`: Or, to mirror only a single repository: :::file - * * * * root rsync -aH rsync://repo-rsync.opensciencegrid.org/osg//el9/development /var/www/html/osg//el7 + * * * * root rsync -aH rsync://repo-rsync.opensciencegrid.org/osg//el9/development /var/www/html/osg//el9 -Replace `` with the OSG release you would like to use (e.g. 
`23-main`) and `` with a number between 0 +Replace `` with the OSG release you would like to use (e.g. `24-main`) and `` with a number between 0 and 59. On your worker node, you can replace the `baseurl` line of `/etc/yum.repos.d/osg.repo` with the appropriate URL for your diff --git a/docs/compute-element/install-htcondor-ce.md b/docs/compute-element/install-htcondor-ce.md index 7650ddcae..f312de5b1 100644 --- a/docs/compute-element/install-htcondor-ce.md +++ b/docs/compute-element/install-htcondor-ce.md @@ -23,7 +23,7 @@ Before Starting --------------- Before starting the installation process, consider the following points, consulting the upstream references as needed -([HTCondor-CE 23](https://htcondor.com/htcondor-ce/v23/reference/)): +([HTCondor-CE 24](https://htcondor.com/htcondor-ce/v24/reference/)): - **User IDs:** If they do not exist already, the installation will create the Linux users `condor` (UID 4716) and `gratia` @@ -43,7 +43,7 @@ Before starting the installation process, consider the following points, consult - **Access point/login node:** HTCondor-CE should be installed on a host that already has the ability to submit jobs into your local cluster - **File Systems**: Non-HTCondor batch systems require a - [shared file system](https://htcondor.com/htcondor-ce/v23/configuration/local-batch-system/#sharing-the-spool-directory) + [shared file system](https://htcondor.com/htcondor-ce/v24/configuration/local-batch-system/#sharing-the-spool-directory) between the HTCondor-CE host and the batch system worker nodes. As with all OSG software installations, there are some one-time (per host) steps to prepare in advance: @@ -113,7 +113,7 @@ For more advanced configuration, see the section on [optional configurations](#o ### Configuring the local batch system ### To configure HTCondor-CE to integrate with your local batch system, -please refer to the [upstream documentation](https://htcondor.com/htcondor-ce/v23/configuration/local-batch-system/). 
+please refer to the [upstream documentation](https://htcondor.com/htcondor-ce/v24/configuration/local-batch-system/). ### Configuring authentication ### @@ -137,7 +137,7 @@ To accept RARs from a particular collaboration: SCITOKENS /^https\:\/\/scitokens\.org\/osg\-connect,/ osgpilot For more details of the mapfile format, consult the "SciTokens" section of the -[upstream documentation](https://htcondor.com/htcondor-ce/v23/configuration/authentication/#scitokens). +[upstream documentation](https://htcondor.com/htcondor-ce/v24/configuration/authentication/#scitokens). #### Bannning a collaboration @@ -188,8 +188,8 @@ In addition to the configurations above, you may need to further configure how p before they are submitted to your local batch system or otherwise change the behavior of your CE. For detailed instructions, please refer to the upstream documentation: -- [Configuring the Job Router](https://htcondor.com/htcondor-ce/v23/configuration/job-router-overview/) -- [Optional configuration](https://htcondor.com/htcondor-ce/v23/configuration/optional-configuration/) +- [Configuring the Job Router](https://htcondor.com/htcondor-ce/v24/configuration/job-router-overview/) +- [Optional configuration](https://htcondor.com/htcondor-ce/v24/configuration/optional-configuration/) #### Accounting with multiple CEs or local user jobs @@ -208,16 +208,16 @@ Starting and Validating HTCondor-CE ----------------------------------- For information on how to start and validate the core HTCondor-CE services, please refer to the -[upstream documentation](https://htcondor.com/htcondor-ce/v23/operation/) +[upstream documentation](https://htcondor.com/htcondor-ce/v24/operation/) Troubleshooting HTCondor-CE --------------------------- For information on how to troubleshoot your HTCondor-CE, please refer to the upstream documentation: -- [Common issues](https://htcondor.com/htcondor-ce/v23/troubleshooting/common-issues/) -- [Debugging 
tools](https://htcondor.com/htcondor-ce/v23/troubleshooting/debugging-tools/) -- [Helpful logs](https://htcondor.com/htcondor-ce/v23/troubleshooting/logs/) +- [Common issues](https://htcondor.com/htcondor-ce/v24/troubleshooting/common-issues/) +- [Debugging tools](https://htcondor.com/htcondor-ce/v24/troubleshooting/debugging-tools/) +- [Helpful logs](https://htcondor.com/htcondor-ce/v24/troubleshooting/logs/) Registering the CE ------------------ diff --git a/docs/data/external-oasis-repos.md b/docs/data/external-oasis-repos.md index 867a5881d..312a525c2 100644 --- a/docs/data/external-oasis-repos.md +++ b/docs/data/external-oasis-repos.md @@ -21,7 +21,7 @@ Before Starting The host OS must be: -- RHEL7 or RHEL8 (or equivalent). +- RHEL8 or RHEL9 (or equivalent). Additionally, @@ -36,13 +36,6 @@ Additionally, repository itself will be done as an unprivileged user. - **Yum** will need to be [configured to use the OSG repositories](../common/yum.md). -!!! warning "Overlay-FS limitations" - CVMFS on RHEL7 only supports Overlay-FS if the underlying filesystem is `ext3` or `ext4`; make sure - `/var/spool/cvmfs` is one of these filesystem types. - - If this is not possible, add `CVMFS_DONT_CHECK_OVERLAYFS_VERSION=yes` to your CVMFS configuration. Using - `xfs` will work if it was created with `ftype=1` - Installation ------------ diff --git a/docs/data/run-frontier-squid-container.md b/docs/data/run-frontier-squid-container.md index 77d404752..222c916a1 100644 --- a/docs/data/run-frontier-squid-container.md +++ b/docs/data/run-frontier-squid-container.md @@ -78,6 +78,10 @@ on configuration customization. Running a Frontier Squid Container ---------------------------------- +!!! info "Where is the OSG 24 container?" + We are actively reworking our image build infrastructure for OSG 24 and expect to have all OSG Software containers + available by the end of 2024. 
+ To run a Frontier Squid container with the defaults: ```console diff --git a/docs/data/stashcache/install-cache.md b/docs/data/stashcache/install-cache.md index b133512d0..93e8442ed 100644 --- a/docs/data/stashcache/install-cache.md +++ b/docs/data/stashcache/install-cache.md @@ -247,19 +247,6 @@ To use HTTPS: 1. Uncomment `set EnableVoms = 1` in `/etc/xrootd/config.d/10-osg-xrdvoms.cfg` -!!! note "Upgrading from OSG 3.5" - If upgrading from OSG 3.5, you may have a file with the following contents in `/etc/xrootd/config.d`: - - # Support HTTPS access to unauthenticated cache - if named stash-cache - http.cadir /etc/grid-security/certificates - http.cert /etc/grid-security/xrd/xrdcert.pem - http.key /etc/grid-security/xrd/xrdkey.pem - http.secxtractor /usr/lib64/libXrdLcmaps.so - fi - - You must delete this config block or XRootD will fail to start. - Manually Setting the FQDN (optional) ------------------------------------ @@ -313,7 +300,7 @@ As a reminder, here are common service commands (all run as `root`): |--------------|------------------|-----------| | XRootD | `xrootd@stash-cache.service` | The XRootD daemon, which performs the data transfers | | XCache | `xcache-reporter.timer` | Reports usage information to collector.opensciencegrid.org | -| Fetch CRL |EL8: `fetch-crl.timer`
EL7: `fetch-crl-boot` and `fetch-crl-cron` | Required to authenticate monitoring services. See [CA documentation](../../common/ca.md#managing-fetch-crl-services) for more info | +| Fetch CRL | `fetch-crl.timer` | Required to authenticate monitoring services. See [CA documentation](../../common/ca.md#managing-fetch-crl-services) for more info | | | `stash-authfile@stash-cache.service` | Generate authentication configuration files for XRootD (public cache instance) | | | `stash-authfile@stash-cache.timer` | Periodically run the above service (public cache instance) | @@ -375,24 +362,6 @@ STASHCACHE_DaemonVersion = "1.0.0" ``` -Updating to OSG 3.6 -------------------- - -The OSG 3.5 series reached end-of-life on May 1, 2022. -Admins are strongly encouraged to move their caches to OSG 3.6. - -See [general update instructions](../../release/updating-to-osg-36.md). - -Unauthenticated caches (`xrootd@stash-cache` service) do not need any configuration changes, -unless HTTPS access has been enabled. -See the ["enable HTTPS on the unauthenticated cache" section](#enable-https-on-the-unauthenticated-cache)) -for the necessary configuration changes. - -Authenticated caches (`xrootd@stash-cache-auth` service) may need the configuration changes described in the -[updating to OSG 3.6 section](../xrootd/xrootd-authorization.md#updating-to-osg-36) -of the XRootD authorization configuration document. - - Getting Help ------------ diff --git a/docs/data/stashcache/install-origin.md b/docs/data/stashcache/install-origin.md index cbfd83c76..a28b76e60 100644 --- a/docs/data/stashcache/install-origin.md +++ b/docs/data/stashcache/install-origin.md @@ -34,7 +34,7 @@ Before Starting Before starting the installation process, consider the following requirements: -* __Operating system:__ A RHEL 7 or RHEL 8 or compatible operating systems. +* __Operating system:__ A RHEL 8 or RHEL 9 or compatible operating systems. 
* __User IDs:__ If they do not exist already, the installation will create the Linux user IDs `condor` and `xrootd`;
  only the `xrootd` user is utilized for the running daemons.
* __Host certificate:__ Required for authentication.
@@ -328,19 +328,6 @@ See the page on [getting your VO's data into OSDF](vo-data.md).

    Specifying the DN of your origin is not required but it is useful for testing.

-Updating to OSG 3.6
--------------------
-
-The OSG 3.5 series reached end-of-life on May 1, 2022.
-Admins are strongly encouraged to move their origins to OSG 3.6.
-
-See [general update instructions](../../release/updating-to-osg-36.md).
-
-Unauthenticated origins (`xrootd@stash-origin` service) do not need any configuration changes.
-
-Authenticated origins (`xrootd@stash-origin-auth` service) may need the configuration changes described in the
-[updating to OSG 3.6 section](../xrootd/xrootd-authorization.md#updating-to-osg-36)
-of the XRootD authorization configuration document.

Getting Help
------------
diff --git a/docs/data/xrootd/install-cms-xcache.md b/docs/data/xrootd/install-cms-xcache.md
index a6fa54a6d..69062f3a2 100644
--- a/docs/data/xrootd/install-cms-xcache.md
+++ b/docs/data/xrootd/install-cms-xcache.md
@@ -13,7 +13,7 @@ Before Starting
---------------

Before starting the installation process, consider the following requirements:

-* __Operating system:__ A RHEL 7 or compatible operating systems.
+* __Operating system:__ A RHEL 8, RHEL 9, or compatible operating system.
* __User IDs:__ If they do not exist already, the installation will create the Linux user IDs `xrootd`
* __Host certificate:__ Required for client authentication and authentication with CMS VOMS Server
  See our [documentation](../../security/host-certs.md) for instructions on how to request and install host certificates.
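Several of the pages touched here require a host certificate. A quick, generic way to confirm what is installed is to inspect the certificate's subject and expiry with `openssl`; this is only an illustrative sketch (the documented location is `/etc/grid-security/hostcert.pem`, but a throwaway self-signed certificate is generated below so the commands run anywhere):

```shell
# Sketch: inspect a host certificate's subject and expiry, as you would for
# /etc/grid-security/hostcert.pem. A throwaway self-signed certificate is
# generated first so the example is self-contained.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.host" \
    -keyout "$tmp/hostkey.pem" -out "$tmp/hostcert.pem" 2>/dev/null
subject=$(openssl x509 -in "$tmp/hostcert.pem" -noout -subject)
enddate=$(openssl x509 -in "$tmp/hostcert.pem" -noout -enddate)
echo "$subject"
echo "$enddate"
rm -rf "$tmp"
```

On a real host, point the `x509` commands at the installed certificate instead; the `notAfter` date shows when a renewal is due.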
@@ -191,9 +191,9 @@ Managing CMS XCache and associated services ------------------------------------------- These services must be managed by `systemctl` and may start additional services as dependencies. -As a reminder, here are common service commands (all run as `root`) for EL7: +As a reminder, here are common service commands (all run as `root`) for EL8+: -| To... | On EL7, run the command... | +| To... | On EL8+, run the command... | | :-------------------------------------- | :--------------------------------- | | Start a service | `systemctl start ` | | Stop a service | `systemctl stop ` | @@ -206,7 +206,7 @@ As a reminder, here are common service commands (all run as `root`) for EL7: |--------------|------------------|-----------| | XRootD | `xrootd@cms-xcache.service` | The XRootD daemon, which performs the data transfers | | XRootD (Optional)| `cmsd@cms-xcache.service` | The cmsd daemon that interact with the different xrootd servers | -| Fetch CRL | EL8: `fetch-crl.timer`
EL7: `fetch-crl-boot` and `fetch-crl-cron` | Required to authenticate monitoring services. See [CA documentation](../../common/ca.md#managing-fetch-crl-services) for more info | +| Fetch CRL | `fetch-crl.timer` | Required to authenticate monitoring services. See [CA documentation](../../common/ca.md#managing-fetch-crl-services) for more info | | |`xrootd-renew-proxy.service` | Renew a proxy for downloads to the cache | | | `xrootd-renew-proxy.timer` | Trigger daily proxy renewal | diff --git a/docs/data/xrootd/install-standalone.md b/docs/data/xrootd/install-standalone.md index 6c435adfe..2e640b730 100644 --- a/docs/data/xrootd/install-standalone.md +++ b/docs/data/xrootd/install-standalone.md @@ -173,7 +173,7 @@ The specific services are: | Software | Service Name | Notes | |:------------------|:--------------------------------------|:------------------------------------------------------------------------------------------------------------| -| Fetch CRL | EL8,EL9: `fetch-crl.timer`
EL7: `fetch-crl-boot` and `fetch-crl-cron` | See [CA documentation](../../common/ca.md#managing-fetch-crl-services) for more info | +| Fetch CRL | `fetch-crl.timer` | See [CA documentation](../../common/ca.md#managing-fetch-crl-services) for more info | | XRootD | `xrootd@standalone` | Primary xrootd service if _not_ running in [multi-user mode](#enabling-multi-user-support) | | XRootD Multi-user | `xrootd-privileged@standalone` | Primary xrootd service to start _instead of_ `xrootd@standalone` if running in [multi-user mode](#enabling-multi-user-support) | diff --git a/docs/data/xrootd/install-storage-element.md b/docs/data/xrootd/install-storage-element.md index 2c85b5a9c..1d79f81de 100644 --- a/docs/data/xrootd/install-storage-element.md +++ b/docs/data/xrootd/install-storage-element.md @@ -5,7 +5,7 @@ Installing an XRootD Storage Element ==================================== !!! warning - This page is out of date and is not known to work with XRootD 5; parts of it do not work with EL 7+. + This page is out of date. [XRootD](http://xrootd.org/) is a hierarchical storage system that can be used in a variety of ways to access data, typically distributed among actual storage resources. @@ -354,7 +354,7 @@ root@host # systemctl start xrootd@standalone The services are: -| Service | EL 7 & 8 service name | +| Service | EL 8 & 9 service name | |:---------------------------|:-----------------------------| | XRootD (standalone config) | `xrootd@standalone` | | XRootD (clustered config) | `xrootd@clustered` | @@ -365,7 +365,7 @@ The services are: As a reminder, here are common service commands (all run as `root`): -| To ... | On EL 7 & 8, run the command... | +| To ... | On EL 8 & 9, run the command... 
| |:--------------------------------------------|:---------------------------------| | Start a service | `systemctl start SERVICE-NAME` | | Stop a service | `systemctl stop SERVICE-NAME` | diff --git a/docs/data/xrootd/xrootd-authorization.md b/docs/data/xrootd/xrootd-authorization.md index 8abe7bd18..5a921e334 100644 --- a/docs/data/xrootd/xrootd-authorization.md +++ b/docs/data/xrootd/xrootd-authorization.md @@ -93,7 +93,6 @@ Authorizing X.509 proxies Authorizations for proxy-based security are declared in an [XRootD authorization database file](#authorization-database). XRootD authentication plugins are used to provide the mappings that are used in the database. -Starting with [OSG 3.6](../../release/release_series.md#series-overviews), DN mappings are performed with XRootD's built-in GSI support, and FQAN mappings are with the XRootD-VOMS (`XrdVoms`) plugin. @@ -350,113 +349,12 @@ pair, `xrootd-client`, and `voms-clients-cpp` installed: If your transfer does not succeed, re-run `xrdcp` with `--debug 2` for more information. -Updating to OSG 23 +Updating to OSG 24 ------------------ -There are no manual steps necessary for authentication to work when upgrading from OSG 3.6 to OSG 23. -If you are upgrading from an earlier release series, see the [updating to OSG 3.6](#updating-to-osg-36) section below. - -Updating to OSG 3.6 --------------------- - -There are some manual steps that need to be taken for authentication to work in OSG 3.6. - - -### Ensure OSG XRootD packages are fully up-to-date - -Some authentication configuration is provided by OSG packaging. -Old versions of the packages may result in broken configuration. 
-It is best if your packages match the versions in the appropriate `release` subdirectories of -, but at the very least these should be true: - -- `xrootd >= 5.4` -- `xrootd-multiuser >= 2` (if using multiuser) -- `xrootd-scitokens >= 5.4` (if using SciTokens/WLCG Tokens) -- `xrootd-voms >= 5.4.2-1.1` (if using VOMS auth) -- `osg-xrootd >= 3.6` -- `osg-xrootd-standalone >= 3.6` (if installed) -- `xcache >= 3` (if using xcache-derived software such as stash-cache, stash-origin, atlas-xcache, or cms-xcache) - - -### SciToken auth - -#### Updating from XRootD 4 (OSG 3.5 without 3.5-upcoming) - -The config syntax for adding auth plugins has changed between XRootD 4 and XRootD 5. -Replace -``` -ofs.authlib libXrdAccSciTokens.so ... -``` -with -``` -ofs.authlib ++ libXrdAccSciTokens.so ... -``` - -#### Updating from XRootD 5 (OSG 3.5 with 3.5-upcoming) - -No config changes are necessary. - - -### Proxy auth: transitioning from XrdLcmaps to XrdVoms - -In OSG 3.5 and previous, proxy authentication was handled by the XrdLcmaps plugin, provided in the `xrootd-lcmaps` RPM. -This is no longer the case in OSG 3.6; instead it is handled by the XrdVoms plugin, provided in the `xrootd-voms` RPM. - -To continue using proxy authentication, [update your configuration](#updating-xrootd-configuration) -and [your authorization database (Authfile)](#updating-your-authorization-database) -as described below. - -#### Updating XRootD configuration +There are no manual steps necessary for authentication to work when upgrading from OSG 23 to OSG 24. -- **Remove any old config in `/etc/xrootd` and `/etc/xrootd/config.d` that mentions LCMAPS or `libXrdLcmaps.so`, - otherwise XRootD may fail to start.** - -- If you do not have both an unauthenticated stash-cache and an authenticated stash-cache on the same server, - uncomment `set EnableVoms = 1` in `/etc/xrootd/config.d/10-osg-xrdvoms.cfg`. 
- -- If you have both an an authenticated stash-cache and an unauthenticated stash-cache on the same server, - add the following block to `/etc/xrootd/config.d/10-osg-xrdvoms.cfg`: - - if named stash-cache-auth - set EnableVoms = 1 - fi - -- If you are using XRootD Multiuser, create a VOMS Mapfile at `/etc/grid-security/voms-mapfile`, - with the syntax [described above](#mapping-voms-attributes-to-users), - then add `voms.mapfile /etc/grid-security/voms-mapfile` to your XRootD config if it's not already present. - -!!! note - In order to make `yum update` easier, `xrootd-lcmaps` has been replaced with an empty package, which can be removed after upgrading. - - -#### Updating your authorization database - -Unlike the XrdLcmaps plugin, which mapped VOMS FQANs to users `u`, the XrdVoms plugin maps FQANs to -groups `g`, roles `r`, and organizations `o`, as described in the [mapping VOMS attributes section](#mapping-voms-attributes). -You can still [use a VOMS mapfile](#mapping-voms-attributes-to-users) -but if you want to use the mappings provided at `/usr/share/osg/voms-mapfile-default` -by the `vo-client-lcmaps-voms` package, -you must copy them to `/etc/grid-security/voms-mapfile`. - - -Replace mappings based on users with mappings based on the other attributes. -For example, instead of -``` -u uscmslocal /uscms rl -``` -use -``` -g /cms/uscms /uscms rl -``` - -If you need to make a mapping based on group _and_ role, create and use a "compound ID" as described in -[the XRootD security documentation](https://xrootd.slac.stanford.edu/doc/dev47/sec_config.htm#_Toc489606599). - -``` -# create the ID named "cmsprod" -= cmsprod g /cms r Production - -# use it -x cmsprod /cmsprod rl -``` +Updating to OSG 23 +------------------ +There are no manual steps necessary for authentication to work when upgrading from OSG 3.6 to OSG 23. 
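When double-checking an authorization database after an upgrade, it can help to filter records by their single-letter ID type (`u` for users, `g` for groups, `o` for organizations). A minimal sketch with standard text tools, assuming the common Authfile location `/etc/xrootd/Authfile` (a temporary stand-in file is used here so the example runs anywhere):

```shell
# Sketch: list authorization-database records for a given ID type and name.
# The here-document stands in for a real Authfile such as /etc/xrootd/Authfile.
authfile=$(mktemp)
cat > "$authfile" <<'EOF'
u xrootd /store lr
g /cms/uscms /uscms rl
o osg /ospool rl
EOF
# Show the group ('g') records for /cms/uscms:
match=$(awk '$1 == "g" && $2 == "/cms/uscms"' "$authfile")
echo "$match"
rm -f "$authfile"
```

This only greps the file; XRootD itself remains the authority on how the records are interpreted.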
diff --git a/docs/other/configuration-with-osg-configure.md b/docs/other/configuration-with-osg-configure.md index 1eb8d4304..5fef590ba 100644 --- a/docs/other/configuration-with-osg-configure.md +++ b/docs/other/configuration-with-osg-configure.md @@ -323,7 +323,7 @@ This section is contained in `/etc/osg/config.d/35-pilot.ini` | **max\_wall\_time** | Positive Integer | The maximum wall-clock time a job is allowed to run for this pilot type, in minutes | | **queue** | String | The queue or partition which jobs should be submitted to in order to run on this resource. Equivalent to the HTCondor grid universe classad attribute `remote_queue` | | **require\_singularity** | true, false | True if the pilot should require singularity or apptainer on the workers. | -| **os** | Comma-separated List | The OS of the workers; allowed values are `rhel6`, `rhel7`, `rhel8`, or `ubuntu18`. This is required unless require_singularity = true | +| **os** | Comma-separated List | The OS of the workers; allowed values are `rhel8` and `rhel9`. This is required unless require_singularity = true | | **send\_tests* | true, false | Send test pilots? Currently not working, placeholder | | **allowed\_vos** | Comma-separated List or `*` | A comma-separated list of collaborations that are allowed to submit to this subcluster | diff --git a/docs/other/install-gwms-frontend.md b/docs/other/install-gwms-frontend.md index 67087ae1a..bab5f6b8a 100644 --- a/docs/other/install-gwms-frontend.md +++ b/docs/other/install-gwms-frontend.md @@ -338,7 +338,11 @@ For most installations create a new file named `/etc/condor/config.d/92_local_co The above procedure will work if you are using the OSG HTCondor RPMS. You can verify that you used the OSG HTCondor RPM by using `yum list condor`. The -version name should include "osg", e.g. `8.6.4-3.osg.el7`. +package repository should be "osg", e.g. 
+``` console +Available Packages +condor.x86_64 24.0.16-1.el9 osg +``` If you are using the UW Madison HTCondor RPMS, be aware of the following changes: @@ -663,7 +667,6 @@ groupwould only match jobs that have the `+is_itb=True` ClassAd. 6. Reconfigure the Frontend (see the [section below](#reconfiguring-glideinwms)): :::console - # on EL7 systems systemctl reload gwms-frontend Using GlideinWMS @@ -675,7 +678,7 @@ In addition to the GlideinWMS service itself, there are a number of supporting s | Software | Service name | Notes | |:-----------|:--------------------------------------|:----------------------------------------------------------------------------------| -| Fetch CRL | EL8: `fetch-crl.timer`
EL7: `fetch-crl-boot` and `fetch-crl-cron` | See [CA documentation](../common/ca.md#managing-fetch-crl-services) for more info | +| Fetch CRL | `fetch-crl.timer` | See [CA documentation](../common/ca.md#managing-fetch-crl-services) for more info | | Gratia | `gratia-probes-cron` | Accounting software | | HTCondor | `condor` | | | HTTPD | `httpd` | GlideinWMS monitoring and staging | diff --git a/docs/other/osg-token-renewer.md b/docs/other/osg-token-renewer.md index f59c42a73..430da2363 100644 --- a/docs/other/osg-token-renewer.md +++ b/docs/other/osg-token-renewer.md @@ -159,9 +159,9 @@ Managing the OSG Token Renewal Service These services are managed by `systemctl` and may start additional services as dependencies. -As a reminder, here are common service commands (all run as `root`) for EL7: +As a reminder, here are common service commands (all run as `root`): -| To... | On EL7, run the command... | +| To... | Run the command... | | :-------------------------------------- | :--------------------------------- | | Start a service | `systemctl start ` | | Stop a service | `systemctl stop ` | diff --git a/docs/other/troubleshooting-gratia.md b/docs/other/troubleshooting-gratia.md index 753eec4c1..c7f4262ec 100644 --- a/docs/other/troubleshooting-gratia.md +++ b/docs/other/troubleshooting-gratia.md @@ -46,7 +46,7 @@ If you are still not sure, you can run the following command to determine if thi ``` console $ rpm -q osg-ce -osg-ce-3.6-4.osg36.el7.x86_64 +osg-ce-24-1.osg24.el9.x86_64 ``` If the output is blank, then you are not working with a CE host. 
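The `rpm -q osg-ce` check above distinguishes CE hosts from other hosts: `rpm -q` prints the package NVRA when installed and `package osg-ce is not installed` otherwise. A sketch of that branching logic, with the query output stubbed so it runs without an RPM database:

```shell
# Sketch of the CE-host check. The rpm output is stubbed here so the logic
# can be shown standalone; on a real host use: rpm_output=$(rpm -q osg-ce)
rpm_output="osg-ce-24-1.osg24.el9.x86_64"
case "$rpm_output" in
  ""|"package osg-ce is not installed")
    host_type="not a CE host" ;;
  *)
    host_type="CE host" ;;
esac
echo "$host_type"
```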
@@ -78,7 +78,7 @@ type of host that you are troubleshooting:
 If they are not running, consult the relevant documentation to enable and start the appropriate service:
 
 - [Access Point](../submit/install-ospool-ap.md#managing-services)
-- [Compute Entrypoint](https://htcondor.com/htcondor-ce/v23/operation/#managing-htcondor-ce-services)
+- [Compute Entrypoint](https://htcondor.com/htcondor-ce/v24/operation/#managing-htcondor-ce-services)
 
 ### Identifying failures ###
 
@@ -141,7 +141,6 @@ When troubleshooting Gratia, there are two different configurations to investiga
 
 The HTCondor and/or HTCondor-CE configuration determines where job history files are written and how often the
 Gratia probe Schedd cron job are run.
-If you recently updated your host to OSG 3.6, it's important to verify the location of the job history files.
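A quick way to see where those per-job history files land: `condor_config_val` is the standard HTCondor query tool, and `PER_JOB_HISTORY_DIR` is the relevant knob; the fallback directory below is only an illustrative value borrowed from this guide's Access Point examples, not a universal default.

```shell
# Ask HTCondor where per-job history files are written; if HTCondor is not
# installed (or the knob is unset), fall back to an illustrative directory.
HISTORY_DIR=$(condor_config_val PER_JOB_HISTORY_DIR 2>/dev/null) \
    || HISTORY_DIR="/var/lib/condor/gratia/data"
echo "Gratia will read job history files from: $HISTORY_DIR"
```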
 #### Access Points ####
@@ -256,18 +268,6 @@ Verify that your Gratia configuration is correct in `/etc/gratia/condor-ap/Probe
 
         :::xml
         EnableProbe="1"
 
-1. If you are updating an existing ProbeConfig from a pre-OSG 3.6 installation,
-   also ensure that the following values are set:
-
-    | Option                     | Value                                                                     |
-    |:---------------------------|---------------------------------------------------------------------------|
-    | `VOOverride`               | The collaboration's resource pool of your AP, e.g. `osg` for an OSPool AP |
-    | `SuppressGridLocalRecords` | `"1"`                                                                     |
-    | `MapUnknownToGroup`        | `"1"`                                                                     |
-    | `DataFolder`               | `"/var/lib/condor/gratia/data/"`                                          |
-    | `WorkingFolder`            | `"/var/lib/condor/gratia/tmp/"`                                           |
-    | `LogFolder`                | `"/var/log/condor/gratia/"`                                               |
-
 #### Compute Entrypoints ####
 
 In normal cases, `osg-configure` manages the relevant ProbeConfig and it can be configured by modifying
@@ -398,7 +398,6 @@ The most common RPMs you will see are:
 
 | RPM                        | Purpose                                                                        |
 |:---------------------------|:-------------------------------------------------------------------------------|
 | `gratia-probe-common`      | Code shared between all Gratia probes                                          |
-| `gratia-probe-condor`      | An empty probe to ease updates from OSG 3.5 to OSG 3.6. Can be safely removed  |
 | `gratia-probe-condor-ap`   | The probe that tracks Access Point usage                                       |
 | `gratia-probe-htcondor-ce` | Probe that tracks HTCondor-CE usage                                            |
diff --git a/docs/release/osg-24.md b/docs/release/osg-24.md
new file mode 100644
index 000000000..fbf28e0ba
--- /dev/null
+++ b/docs/release/osg-24.md
@@ -0,0 +1,77 @@
+title: OSG 24 News
+
+OSG 24 News
+===========
+
+**Supported OS Versions:** EL8, EL9 (see [this document](supported_platforms.md) for details)
+
+OSG 24 is the second release series following our [annual release schedule](release_series.md) and includes support for
+the ARM CPU architecture.
+The initial release includes GlideinWMS 3.10.7, HTCondor 24.0.1, HTCondor 24.1.1, HTCondor-CE 24.0, and XRootD 5.7.0.
+
+OSG 24 will be supported for [approximately two years total](release_series.md#series-life-cycle).
+
+Latest News
+-----------
+
+### October 31, 2024: Initial Release ###
+
+!!! info "Where is the OSG 24 worker node tarball?"
+    We plan to distribute the worker node tarball within the next week. Stay tuned for updates!
+
+This initial release contains the following notable changes compared to the current OSG 23 release:
+
+- [HTCondor 24.0.1](https://htcondor.readthedocs.io/en/24.0/version-history/lts-versions-24-0.html#version-24-0-1)
+
+- [HTCondor-CE 24.0.0](https://htcondor.com/htcondor-ce/v24/installation/htcondor-ce)
+
+- [Pelican 7.10.11](https://github.com/PelicanPlatform/pelican/releases/tag/v7.10.11):
+  the initial release of Pelican in the main line of the OSG Software Stack.
+  Pelican is the new foundational software for the [OSDF](../data/stashcache/overview.md).
+  Administrators of hosts with the `pelican` client installed are encouraged to upgrade to 7.10.11.
+
+    !!! warning "OSDF origins / caches"
+        For operators of existing OSDF caches or origins (formerly `stash-cache` or `stash-origin`, respectively),
+        we recommend waiting for the release of Pelican 7.11 and accompanying `osdf-server` RPMs before upgrading.
+
+- [HTCondor 24.1.1](https://htcondor.readthedocs.io/en/24.x/version-history/feature-versions-24-x.html#version-24-1-1)
+  in [OSG Upcoming](../common/yum.md#upcoming-software)
+
+- OSG PKI tools 3.7.1-2: fixes an issue with missing `python3-*` dependencies
+
+- `ospool-ap` replaces the `osg-flock` RPM
+
+#### Package removals ####
+
+The following packages were removed from OSG 24:
+
+- `hosted-ce-tools`: moved into relevant container images
+- `voms`: available in EPEL
+- `x509-token-issuer`: removed due to lack of demand
+
+#### Container images ####
+
+!!! question "Where are the other OSG images?"
+    We intend to release `atlas-xcache`, `cms-xcache`, `frontier-squid`, `oidc-agent`, and `osg-wn` container images
+    by the end of the year.
+
+    `stash-cache` and `stash-origin` images will be replaced by `pelican_platform/osdf-cache` and
+    `pelican_platform/osdf-origin` images, respectively.
+
+The following container images have new tags for OSG 24:
+
+| Image name                                                | Tags         |
+|:----------------------------------------------------------|:-------------|
+| `hub.opensciencegrid.org/osg-htc/ospool-ep`               | `24-release` |
+
+Announcements
+-------------
+
+Updates to critical packages are also announced by email and sent to the following recipients and lists:
+
+- [Registered administrative contacts](../common/registration.md#registering-resources)
+- [osg-general@opensciencegrid.org](https://listserv.fnal.gov/scripts/wa.exe?A0=OSG-GENERAL)
+- [operations@osg-htc.org](https://listserv.fnal.gov/scripts/wa.exe?A0=OSG-OPERATIONS)
+- [osg-sites@opensciencegrid.org](https://listserv.fnal.gov/scripts/wa.exe?A0=OSG-SITES)
+- [site-announce@opensciencegrid.org](https://listserv.fnal.gov/scripts/wa.exe?A0=site-announce)
+- [software-discuss@osg-htc.org](https://groups.google.com/a/osg-htc.org/g/software-discuss)
diff --git a/docs/release/osg-36.md b/docs/release/osg-36.md
index 295d6b8fe..134262207 100644
--- a/docs/release/osg-36.md
+++ b/docs/release/osg-36.md
@@ -9,7 +9,7 @@ OSG 3.6 News
     See our [Release Series documentation](./release_series.md#series-life-cycle) for more details.
     Note that the OSG 3.6 end-of-life coincides with the wider Enterprise Linux 7 end-of-life.
-    **We recommend upgrading to OSG 23 and an Enteprise Linux 9 distribution at your earliest convenience.**
+    **We recommend upgrading to OSG 24 and an Enterprise Linux 9 distribution at your earliest convenience.**
 
 **Supported OS Versions:** EL7, EL8, EL9
 
diff --git a/docs/release/release_series.md b/docs/release/release_series.md
index d9cc542c4..f34570815 100644
--- a/docs/release/release_series.md
+++ b/docs/release/release_series.md
@@ -34,11 +34,14 @@ Series Overviews
 Since the start of the RPM-based OSG Software Stack, we have offered the following release series:
 
+- **OSG 24** (started October 2024) introduces support for the ARM architecture.
+  The initial release includes GlideinWMS 3.10.7, HTCondor 24.0.1, HTCondor 24.1.1, HTCondor-CE 24.0, and XRootD 5.7.0.
+
 - **OSG 23** (started October 2023) aligns the OSG release series and HTCondor Software Suite release cycles.
-  The initial release includes GlideinWMS 3.10.5, HTCondor 23.0, HTCondor-CE 23.0, and XRootD 5.6.2
+  The initial release includes GlideinWMS 3.10.5, HTCondor 23.0, HTCondor-CE 23.0, and XRootD 5.6.2.
 
-- **OSG 3.6** (started February 2021) overhauls the authentication and data transfer protocols used in the OSG
-  software stack:
+- **OSG 3.6** (started February 2021, end-of-lifed June 2024) overhauled the authentication and data transfer
+  protocols used in the OSG software stack:
   bearer tokens, such as [SciTokens](https://scitokens.org/) or WLCG tokens, are used for authentication instead of
   GSI proxies and HTTP is used for data transfer instead of GridFTP.
See the [OSG GridFTP and GSI migration plan](https://osg-htc.org/technology/policy/gridftp-gsi-migration/) @@ -73,7 +76,8 @@ Support ends at the end of the month of the following dates unless otherwise spe | Release Series | Initial Release | End of Regular Support | End of Critical Bug/Security Support | |:--------------:|-----------------|------------------------|--------------------------------------| -| 23 | October 2023 | Not set | Not set | +| 24 | October 2024 | Estimated October 2026 | Estimated October 2026 | +| 23 | October 2023 | Estimated October 2025 | Estimated October 2025 | | 3.6 | Februrary 2021 | 31 March 2024 | 30 June 2024 | | 3.5 | August 2019 | 30 August 2021 | 1 May 2022 | | 3.4 | June 2017 | 29 February 2020 | 30 November 2020 | diff --git a/docs/release/signing.md b/docs/release/signing.md index ecd9b068d..896113733 100644 --- a/docs/release/signing.md +++ b/docs/release/signing.md @@ -18,40 +18,22 @@ $ rpm --checksig -v For example: ```console -$ rpm --checksig -v globus-core-8.0-2.osg.x86_64.rpm -globus-core-8.0-2.osg.x86_64.rpm: - Header V3 DSA signature: OK, key ID 824b8603 - Header SHA1 digest: OK (2b5af4348c548c27f10e2e47e1ec80500c4f85d7) - MD5 digest: OK (d11503a229a1a0e02262034efe0f7e46) - V3 DSA signature: OK, key ID 824b8603 +# rpm --import https://repo.osg-htc.org/osg/RPM-GPG-KEY-OSG-24-developer +# rpm --checksig -v https://repo.opensciencegrid.org/osg/24-main/osg-24-main-el9-release-latest.rpm +https://repo.opensciencegrid.org/osg/24-main/osg-24-main-el9-release-latest.rpm: + Header V4 RSA/SHA256 Signature, key ID effc3be6: OK + Header SHA256 digest: OK + Header SHA1 digest: OK + Payload SHA256 digest: OK + V4 RSA/SHA256 Signature, key ID effc3be6: OK + MD5 digest: OK ``` The OSG Packaging Signing Keys ------------------------------ The OSG Software Team has several GPG keys for signing RPMs; -The key used depends on the OSG version and EL variant used, as documented below: - -| Key 1 (3.0 to 3.5) | | 
-|--------------------|--------------------------------------------------------|
-| Location           | `/etc/pki/rpm-gpg/RPM-GPG-KEY-OSG` |
-| Download           | [UW-Madison](https://vdt.cs.wisc.edu/RPM-GPG-KEY-OSG), [GitHub](https://raw.githubusercontent.com/opensciencegrid/docs/master/docs/release/RPM-GPG-KEY-OSG) |
-| Fingerprint        | `6459 D9D2 AAA9 AB67 A251 FB44 2110 B1C8 824B 8603` |
-| Key ID             | `824b8603` |
-
-| Key 2 (3.6 and on, EL <= 8) | |
-|--------------------|--------------------------------------------------------|
-| Location           | `/etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-2` |
-| Download           | [UW-Madison](https://vdt.cs.wisc.edu/RPM-GPG-KEY-OSG-2), [GitHub](https://raw.githubusercontent.com/opensciencegrid/docs/master/docs/release/RPM-GPG-KEY-OSG-2) |
-| Fingerprint        | `1216 FF68 897A 77EA 222F C961 27DC 6864 96D2 B90F` |
-| Key ID             | `96d2b90f` |
-
-| Key 4 (3.6 and on, EL >= 9) | |
-|--------------------|--------------------------------------------------------|
-| Location           | `/etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-4` |
-| Download           | [GitHub](https://raw.githubusercontent.com/opensciencegrid/docs/master/docs/release/RPM-GPG-KEY-OSG-4) |
-| Fingerprint        | `B77E 70A6 0537 1D3B E109 A18E 3170 E150 1887 C61A` |
-| Key ID             | `1887c61a` |
+The key used depends on the OSG version and software repository used, as documented below:
 
 | OSG 23 Automated Signing Key | |
 |--------------------|--------------------------------------------------------|
 | Location           | `/etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-23-auto` |
@@ -59,6 +41,7 @@ The key used depends on the OSG version and EL variant used, as documented below
 | Download           | [GitHub](https://raw.githubusercontent.com/opensciencegrid/docs/master/docs/release/RPM-GPG-KEY-OSG-23-auto) |
 | Fingerprint        | `E2AF 9F6E 239F D62B 5377 05C0 1760 EDF6 4D43 84D0` |
 | Key ID             | `4d4384d0` |
+| Repositories       | osg-23-development |
 
 | OSG 23 Developer Signing Key | |
 |--------------------|--------------------------------------------------------|
@@ -66,46 +49,28 @@ The key used depends on the OSG version and EL variant used, as
documented below | Download | [GitHub](https://raw.githubusercontent.com/opensciencegrid/docs/master/docs/release/RPM-GPG-KEY-OSG-23-developer) | | Fingerprint | `4A56 C5BB CDB0 AAA2 DDE9 A690 BDEE E24C 9289 7C00` | | Key ID | `92897c00` | +| Repositories | All non-development osg-23 repositories | + +| OSG 24 Automated Signing Key | | +|--------------------|--------------------------------------------------------| +| Location | `/etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-24-auto` | +| Download | [GitHub](https://raw.githubusercontent.com/opensciencegrid/docs/master/docs/release/RPM-GPG-KEY-OSG-24-auto) | +| Fingerprint | `E612 A4B4 2EE0 71C3 15D1 1CDB 51F0 C137 34E9 58B3` | +| Key ID | `34e958b3` | +| Repositories | osg-24-development | -!!! note - Some packages in the 3.6 repos may still be signed with the old key; - the `osg-release` RPM contains both keys so you can verify old packages. +| OSG 24 Developer Signing Key | | +|--------------------|--------------------------------------------------------| +| Location | `/etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-24-developer` | +| Download | [GitHub](https://raw.githubusercontent.com/opensciencegrid/docs/master/docs/release/RPM-GPG-KEY-OSG-24-developer) | +| Fingerprint | `F77F E0C7 0A9B AA73 9FD3 52C9 9DF7 5B52 EFFC 3BE6` | +| Key ID | `effc3be6` | +| Repositories | All non-development osg-24 repositories | You can see the fingerprint for yourself. 
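As a quick cross-check that needs no `gpg` at all: each short key ID in the tables above is simply the last 8 hex digits of the corresponding fingerprint. A sketch using the OSG 24 Developer Signing Key values from the table:

```shell
# A short GPG key ID is the final 8 hex digits of the full fingerprint.
fingerprint="F77F E0C7 0A9B AA73 9FD3 52C9 9DF7 5B52 EFFC 3BE6"  # from the table above
key_id=$(printf '%s' "$fingerprint" | tr -d ' ' | tail -c 8 | tr 'A-F' 'a-f')
echo "$key_id"   # prints: effc3be6
```

If the computed value does not match the table's Key ID, one of the two values was copied incorrectly.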
-On EL 7 and older (GnuPG < 2.1.13): - -```console -$ gpg --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-OSG -pub 1024D/824B8603 2011-09-15 OSG Software Team (RPM Signing Key for Koji Packages) - Key fingerprint = 6459 D9D2 AAA9 AB67 A251 FB44 2110 B1C8 824B 8603 -sub 2048g/28E5857C 2011-09-15 - -$ gpg --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-2 -pub 4096R/96D2B90F 2021-02-24 Open Science Grid Software - Key fingerprint = 1216 FF68 897A 77EA 222F C961 27DC 6864 96D2 B90F -sub 4096R/49E9ACC2 2021-02-24 -``` On EL 8 and newer (GnuPG >= 2.1.13): ```console -$ gpg --import-options show-only --import < /etc/pki/rpm-gpg/RPM-GPG-KEY-OSG -pub dsa1024 2011-09-15 [SC] - 6459D9D2AAA9AB67A251FB442110B1C8824B8603 -uid OSG Software Team (RPM Signing Key for Koji Packages) -sub elg2048 2011-09-15 [E] - -$ gpg --import-options show-only --import < /etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-2 -pub rsa4096 2021-02-24 [SC] - 1216FF68897A77EA222FC96127DC686496D2B90F -uid Open Science Grid Software -sub rsa4096 2021-02-24 [E] - -$ gpg --import-options show-only --import < /etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-4 -pub rsa4096 2022-12-28 [SC] - B77E70A605371D3BE109A18E3170E1501887C61A -uid OSG Software 3.6 for EL9 RSA -sub rsa4096 2022-12-28 [E] - $ gpg --import-options show-only --import < /etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-23-auto pub rsa4096 2023-06-23 [SC] E2AF9F6E239FD62B537705C01760EDF64D4384D0 @@ -118,5 +83,17 @@ pub rsa4096 2023-08-15 [SC] uid OSG 23 Developer Signing Key sub rsa4096 2023-08-15 [E] +$ gpg --import-options show-only --import < /etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-24-auto +pub rsa4096 2024-08-20 [SC] + Key fingerprint = E612A4B42EE071C315D11CDB51F0C13734E958B3 +uid OSG 24 Automated Signing Key +sub rsa4096 2024-08-20 [E] + +$ gpg --import-options show-only --import < /etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-24-developer +pub rsa4096 2024-08-20 [SC] + F77FE0C70A9BAA739FD352C99DF75B52EFFC3BE6 +uid OSG 24 Developer Signing Key +sub rsa4096 2024-08-20 [E] + ``` diff --git 
a/docs/release/supported_platforms.md b/docs/release/supported_platforms.md index 485d589d8..71b94f04c 100644 --- a/docs/release/supported_platforms.md +++ b/docs/release/supported_platforms.md @@ -3,18 +3,10 @@ title: Supported Platforms Supported Platforms =================== -!!! danger "Upcoming OSG 3.6 end-of-support dates" - OSG 3.6 reaches the "End of Regular Support" on March 31, 2024 and will only receive critical bug-fix and - security updates until its end-of-life on June 30, 2024. - See our [Release Series documentation](./release_series.md#series-life-cycle) for more details. - - Note that the OSG 3.6 end-of-life coincides with the wider Enterprise Linux 7 end-of-life. - **We recommend upgrading to OSG 23 and an Enteprise Linux 9 distribution at your earliest convenience.** - The OSG Software [Release Series](../release/release_series.md) are supported on Red Hat Enterprise Linux (RHEL) compatible platforms for 64-bit Intel architectures according to the following table: -| Platform | OSG 3.6 | OSG 23 | +| Platform | OSG 23 | OSG 24 | |----------------------------|---------|---------| | Alma Linux 9 | ✅ | ✅ | | CentOS Stream 9 | ✅ | ✅ | @@ -24,12 +16,18 @@ compatible platforms for 64-bit Intel architectures according to the following t | CentOS Stream 8 | ✅ | ✅ | | Red Hat Enterprise Linux 8 | ✅ | ✅ | | Rocky Linux 8 | ✅ | ✅ | -| CentOS 7 | ✅ | | -| Red Hat Enterprise Linux 7 | ✅ | | -| Scientifix Linux 7 | ✅ | | -OSG builds and tests its RPMs on the latest releases of the relevant platforms (e.g., in 2023, the RHEL 9 builds were -based on RHEL 9.2). + +Starting in OSG 24, the above platforms are also supported on 64-bit ARM architecture: + +| Architecture | OSG 23 | OSG 24 | +|----------------------------|---------|---------| +| 64-bit Intel (amd64) | ✅ | ✅ | +| 64-bit ARM (aarch64) | | ✅ | + + +OSG builds and tests its RPMs on the latest releases of the relevant platforms (e.g., in 2024, the RHEL 9 builds were +based on RHEL 9.4). 
Older platform release versions may not receive thorough testing and may have subtle bugs. If you run into problems with an older OS version, you will be asked them to update to the latest operating system packages as part of the support process. @@ -47,5 +45,5 @@ OSG Software [Release Series](../release/release_series.md): | Release Series | Expected Release | EL8 | EL9 | |----------------|------------------|---------|---------| -| OSG 24 | Q3 2024 | ✅ | ✅ | | OSG 25 | Q3 2025 | ✅ | ✅ | +| OSG 26 | Q3 2026 | ✅ | ✅ | diff --git a/docs/release/yum-basics.md b/docs/release/yum-basics.md index 280388005..d1a5219bc 100644 --- a/docs/release/yum-basics.md +++ b/docs/release/yum-basics.md @@ -15,35 +15,47 @@ Installation Installation is done with the `yum install` command. Each of the individual installation guide shows you the correct command to use to do an installation. Here is an example installation with all of the output from yum. ```console -root@host # sudo yum install osg-ca-certs -OSG Software for Enterprise Linux 9 - x86_64 668 kB/s | 438 kB 00:00 +root@host# yum install osg-ca-certs +OSG Software for Enterprise Linux 9 - x86_64 5.1 kB/s | 1.4 kB 00:00 +OSG Software for Enterprise Linux 9 - Development - x86_64 1.4 MB/s | 470 kB 00:00 Dependencies resolved. 
-==================================================================================================================== - Package Architecture Version Repository Size -==================================================================================================================== +=================================================================================================== + Package Architecture Version Repository Size +=================================================================================================== Installing: - osg-ca-certs noarch 1.110-1.2.osg36.el9 osg 244 k + osg-ca-certs noarch 1.131-1.osg24.el9 osg 242 k Transaction Summary -==================================================================================================================== +=================================================================================================== Install 1 Package -Total download size: 244 k -Installed size: 340 k +Total download size: 242 k +Installed size: 483 k Is this ok [y/N]: y Downloading Packages: -osg-ca-certs-1.110-1.2.osg36.el9.noarch.rpm 1.5 MB/s | 244 kB 00:00 --------------------------------------------------------------------------------------------------------------------- -Total 1.0 MB/s | 244 kB 00:00 -OSG Software for Enterprise Linux 9 - x86_64 3.0 MB/s | 3.1 kB 00:00 -Importing GPG key 0x1887C61A: - Userid : "OSG Software 3.6 for EL9 RSA " - Fingerprint: B77E 70A6 0537 1D3B E109 A18E 3170 E150 1887 C61A - From : /etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-4 +osg-ca-certs-1.131-1.osg24.el9.noarch.rpm 1.8 MB/s | 242 kB 00:00 +--------------------------------------------------------------------------------------------------- +Total 1.2 MB/s | 242 kB 00:00 +OSG Software for Enterprise Linux 9 - Development - x86_64 3.0 MB/s | 3.1 kB 00:00 +Importing GPG key 0xEFFC3BE6: + Userid : "OSG 24 Developer Signing Key " + Fingerprint: F77F E0C7 0A9B AA73 9FD3 52C9 9DF7 5B52 EFFC 3BE6 + From : /etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-24-developer Is 
this ok [y/N]: y -... +Key imported successfully +Running transaction check +Transaction check succeeded. +Running transaction test +Transaction test succeeded. +Running transaction + Preparing : 1/1 + Installing : osg-ca-certs-1.131-1.osg24.el9.noarch 1/1 + Verifying : osg-ca-certs-1.131-1.osg24.el9.noarch 1/1 + Installed: - osg-ca-certs-1.110-1.2.osg36.el9.noarch + osg-ca-certs-1.131-1.osg24.el9.noarch + +Complete! ``` **Please Note**: When you first install a package from the OSG repository, you will be prompted to import the GPG key. We use this key to sign our RPMs as a security measure. You should double-check the key id (above it is 824B8603) with the [information on our signed RPMs](signing.md). If it doesn't match, there is a problem somewhere and you should report it to the OSG via help@osg-htc.org. @@ -125,7 +137,7 @@ If you want to know what other things are in a package--perhaps the other availa What else does a package install? --------------------------------- -Sometimes you need to understand what other software is installed by a package. This can be particularly useful for understanding *meta-packages*, which are packages such as the `osg-wn-client` (worker node client) that contain nothing by themselves but only depend on other RPMs. To do this, use the `--requires` option to rpm. For example, you can see that the worker node client (as of OSG 3.6.0 in early June, 2023) will install `curl`, `uberftp`, `wget`, and a dozen or so other packages. +Sometimes you need to understand what other software is installed by a package. This can be particularly useful for understanding *meta-packages*, which are packages such as the `osg-wn-client` (worker node client) that contain nothing by themselves but only depend on other RPMs. To do this, use the `--requires` option to rpm. For example, you can see that the worker node client (as of OSG 24 in October, 2024) will install `curl`, `uberftp`, `wget`, and a dozen or so other packages. 
:::console user@host $ rpm -q --requires osg-wn-client @@ -133,7 +145,7 @@ Sometimes you need to understand what other software is installed by a package. /usr/bin/ldapsearch /usr/bin/wget /usr/bin/xrdcp - config(osg-wn-client) = 3.6-6.osg36.el9 + config(osg-wn-client) = 24-1.osg24.el9 fetch-crl gfal2 gfal2-plugin-file @@ -158,16 +170,19 @@ It is normally best to read the OSG documentation to decide which packages to in :::console user@host $ yum list "voms*" Available Packages - voms.x86_64 2.1.0-0.27.rc3.el9 epel - voms-clients-cpp.x86_64 2.1.0-0.27.rc3.el9 epel - voms-api-java.noarch 3.3.2-11.el9 epel - voms-api-java-javadoc.noarch 3.3.2-11.el9 epel - voms-clients-java.noarch 3.3.2-5.el9 epel - voms-devel.x86_64 2.1.0-0.27.rc3.el9 epel - voms-doc.noarch 2.1.0-0.27.rc3.el9 epel - voms-mysql-plugin.x86_64 3.1.7-13.el9 epel - voms-server.x86_64 2.1.0-0.27.rc3.el9 epel - + voms.x86_64 2.1.0-0.31.rc3.2.osg24.el9 osg + voms-api-java.noarch 3.3.3-1.el9 epel + voms-api-java-javadoc.noarch 3.3.3-1.el9 epel + voms-clients-cpp.x86_64 2.1.0-0.31.rc3.2.osg24.el9 osg + voms-clients-cpp-debuginfo.x86_64 2.1.0-0.31.rc3.2.osg24.el9 osg + voms-clients-java.noarch 3.3.3-1.el9 epel + voms-debuginfo.x86_64 2.1.0-0.31.rc3.2.osg24.el9 osg + voms-debugsource.x86_64 2.1.0-0.31.rc3.2.osg24.el9 osg + voms-devel.x86_64 2.1.0-0.31.rc3.2.osg24.el9 osg + voms-doc.noarch 2.1.0-0.31.rc3.2.osg24.el9 osg + voms-mysql-plugin.x86_64 3.1.7-13.el9 epel + voms-server.x86_64 2.1.0-0.31.rc3.2.osg24.el9 osg + voms-server-debuginfo.x86_64 2.1.0-0.31.rc3.2.osg24.el9 osg If you want to search for packages that contain VOMS anywhere in the name or description, you can use `yum search`: @@ -192,21 +207,25 @@ One last example, if you want to know what RPM would give you the `voms-proxy-in :::console user@host $ yum whatprovides "*voms-proxy-init" - voms-clients-cpp-2.1.0-0.27.rc3.el9.x86_64 : Virtual Organization Membership Service Clients - Repo : @System + voms-clients-cpp-2.1.0-0.31.rc3.1.osg24.el9.x86_64 : 
Virtual Organization Membership Service Clients
+   Repo        : osg
    Matched from:
    Other       : *voms-proxy-init
 
-   voms-clients-cpp-2.1.0-0.27.rc3.el9.x86_64 : Virtual Organization Membership Service Clients
-   Repo        : epel
+   voms-clients-cpp-2.1.0-0.31.rc3.2.osg24.el9.x86_64 : Virtual Organization Membership Service Clients
+   Repo        : osg
    Matched from:
    Other       : *voms-proxy-init
 
-   voms-clients-java-3.3.2-5.el9.noarch : Virtual Organization Membership Service Java clients
+   voms-clients-cpp-2.1.0-1.el9.x86_64 : Virtual Organization Membership Service Clients
    Repo        : epel
    Matched from:
    Other       : *voms-proxy-init
+
+   voms-clients-java-3.3.3-1.el9.noarch : Virtual Organization Membership Service Java clients
+   Repo        : epel
+   Matched from:
+   Other       : *voms-proxy-init
 
 Removing Packages
 -----------------
 
@@ -220,9 +239,9 @@ Dependencies resolved.
  Package                        Architecture          Version                           Repository          Size
 ====================================================================================================================
 Removing:
- voms                           x86_64                2.1.0-0.27.rc3.el9                @epel               432 k
+ voms                           x86_64                2.1.0-0.31.rc3.2.osg24.el9        @osg                432 k
 Removing dependent packages:
- osg-wn-client                  noarch                3.6-6.osg36.el9                   @osg                211
+ osg-wn-client                  noarch                24-1.osg24.el9                    @osg                211
 
 Transaction Summary
 ====================================================================================================================
@@ -237,7 +256,7 @@ Running Transaction
 ... etc ...
 
 Removed:
-  voms-2.1.0-0.27.rc3.el9.x86_64
+  voms-2.1.0-0.31.rc3.2.osg24.el9.x86_64
 
 Complete!
 ```
diff --git a/docs/resource-sharing/os-backfill-containers.md b/docs/resource-sharing/os-backfill-containers.md
index b56324961..0dbbf5649 100644
--- a/docs/resource-sharing/os-backfill-containers.md
+++ b/docs/resource-sharing/os-backfill-containers.md
@@ -84,7 +84,7 @@ docker run -it --rm --user osg \
            -e CVMFSEXEC_REPOS=" \
               oasis.opensciencegrid.org \
               singularity.opensciencegrid.org" \
-           opensciencegrid/osgvo-docker-pilot:23-release
+           hub.opensciencegrid.org/osg-htc/ospool-ep:24-release
 ```
 
 Replace `/path/to/token` with the location you saved the token obtained from the OSPool Token Registry.
@@ -150,13 +150,11 @@ but the container will need fewer privileges.
 #### cvmfsexec
 
 !!! info "cvmfsexec System Requirements"
-    - On EL7, you must have kernel version >= 3.10.0-1127 (run `uname -vr` to check), and user namespaces enabled.
-      See step 1 in the
-      [Apptainer Install document](https://osg-htc.org/docs/worker-node/install-apptainer/#enabling-unprivileged-apptainer)
-      for details.
     - On EL8, you must have kernel version >= 4.18 (run `uname -vr` to check).
+    - On EL9, all kernel versions >= 5.0 should be supported.
+      See the [cvmfsexec README](https://github.com/cvmfs/cvmfsexec#readme) for details.
 
 [cvmfsexec](https://github.com/CVMFS/cvmfsexec#readme) is a tool that can be used to mount CVMFS inside the container
@@ -205,7 +203,7 @@ docker run -it --rm --user osg \
            -e GLIDEIN_ResourceName="..." \
            -e GLIDEIN_Start_Extra="True" \
            -e OSG_SQUID_LOCATION="..." \
-           opensciencegrid/osgvo-docker-pilot:23-release
+           hub.opensciencegrid.org/osg-htc/ospool-ep:24-release
 ```
 
 Fill in the values for `/path/to/token`, `/worker-temp-dir`, `GLIDEIN_Site`, `GLIDEIN_ResourceName`, and `OSG_SQUID_LOCATION` [as above](#running-the-container-with-docker).
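Before launching, it can help to sanity-check the site-specific values in one place. A pre-flight sketch — every value below is a placeholder taken from this page's examples, not real site data:

```shell
# Pre-flight check: confirm the site-specific settings are filled in before
# running `docker run`. All values here are placeholders.
TOKEN_PATH="/path/to/token"
GLIDEIN_Site="Example_Site"
GLIDEIN_ResourceName="Example_Resource"
IMAGE="hub.opensciencegrid.org/osg-htc/ospool-ep:24-release"

ok=yes
for val in "$TOKEN_PATH" "$GLIDEIN_Site" "$GLIDEIN_ResourceName"; do
    [ -n "$val" ] || ok=no
done
if [ "$ok" = yes ]; then
    echo "ready to launch $IMAGE"
else
    echo "fill in the missing values before launching"
fi
```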
diff --git a/docs/security/host-certs/lets-encrypt.md b/docs/security/host-certs/lets-encrypt.md
index 41bb1a5f5..52e18d796 100644
--- a/docs/security/host-certs/lets-encrypt.md
+++ b/docs/security/host-certs/lets-encrypt.md
@@ -20,7 +20,7 @@ Let's Encrypt host certs expire every three months so it is important to set up
 Installation and Obtaining the Initial Certificate
 --------------------------------------------------
 
-1. Install the `certbot` package (available from the EPEL 7 repository):
+1. Install the `certbot` package (available from the EPEL repository):
 
         :::console
         root@host # yum install certbot
diff --git a/docs/security/tokens/overview.md b/docs/security/tokens/overview.md
index 5dd1baaee..d238205bf 100644
--- a/docs/security/tokens/overview.md
+++ b/docs/security/tokens/overview.md
@@ -193,10 +193,6 @@ The following collaborations support support file transfer using WebDAV or XRoot
 | LIGO  | Undergoing testing* |
 | OSG   | Yes                 |
 
-\* Currently, collaborations testing WebDAV or XRootD support will continue to support other file transfer protocols so
-it should it should be safe to update your OSG WN clients to OSG 3.6.
-If you have any questions, please contact your collaboration directly.
-
 Help
 ----
diff --git a/docs/security/tokens/using-tokens.md b/docs/security/tokens/using-tokens.md
index 250408c4b..1fb31c408 100644
--- a/docs/security/tokens/using-tokens.md
+++ b/docs/security/tokens/using-tokens.md
@@ -45,6 +45,10 @@ which is useful for looking inside tokens.
 
 #### Registering an OIDC profile
 
+!!! info "Where is the OSG 24 container?"
+    We are actively reworking our image build infrastructure for OSG 24 and expect to have all OSG Software containers
+    available by the end of 2024.
+
 1. Start an agent container in the background and name it `my-agent` to easily run subsequent commands against it:
 
         :::console
diff --git a/docs/site-maintenance.md b/docs/site-maintenance.md
index 89a3e48a7..3f448f58f 100644
--- a/docs/site-maintenance.md
+++ b/docs/site-maintenance.md
@@ -58,6 +58,7 @@ Keep OSG Software Updated
 It is important to keep your software and data (e.g., CAs and VO client) up-to-date
 with the latest OSG release. See the release notes for your installed release series:
 
+- [OSG 24 release notes](release/osg-24.md)
- [OSG 23 release notes](release/osg-23.md)
 
 To stay abreast of software releases, we recommend subscribing to the mailing
@@ -69,7 +70,7 @@ Notify OSG of Major Changes
 To avoid potential issues with OSG job submissions, please
 [notify us](mailto:help@osg-htc.org) of major changes to your site, including:
 
-- Major OS version changes on the worker nodes (e.g., upgraded from EL 7 to EL 8)
+- Major OS version changes on the worker nodes (e.g., upgraded from EL 8 to EL 9)
 - Adding or removing [container support through singularity or apptainer](worker-node/install-apptainer.md)
 - Policy changes regarding OSG resource requests (e.g., number of cores or GPUs, memory usage, or maximum walltime)
 - Scheduled or unscheduled [downtimes](common/registration.md#registering-resource-downtimes)
diff --git a/docs/site-verification.md b/docs/site-verification.md
index b062a9d0a..38bf8c652 100644
--- a/docs/site-verification.md
+++ b/docs/site-verification.md
@@ -29,7 +29,7 @@ Once you have validated job submission from within your site, request test pilot
 
 - The fully qualified domain name of the CE
 - Registered OSG resource name
-- Supported OS version of your worker nodes (e.g., EL7, EL8, or a combination)
+- Supported OS version of your worker nodes (e.g., EL8, EL9, or a combination)
 - Support for multicore jobs
 - Support for GPUs
 - Maximum job walltime
diff --git a/docs/submit/install-ospool-ap.md b/docs/submit/install-ospool-ap.md
index 473f7b0e0..13aebf171 100644
--- a/docs/submit/install-ospool-ap.md
+++ b/docs/submit/install-ospool-ap.md
@@ -112,7 +112,7 @@ required packages. Example on a RHEL 9 host:
 
 ```console
 # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
-# yum install https://repo.opensciencegrid.org/osg/23-main/osg-23-main-el9-release-latest.rpm
+# yum install https://repo.opensciencegrid.org/osg/24-main/osg-24-main-el9-release-latest.rpm
 # yum install osg-flock
 ```
diff --git a/docs/worker-node/install-apptainer.md b/docs/worker-node/install-apptainer.md
index 5a9f801f5..e837c286b 100644
--- a/docs/worker-node/install-apptainer.md
+++ b/docs/worker-node/install-apptainer.md
@@ -18,9 +18,7 @@ or with a setuid-root assist program.
 By default it does not install the setuid-root assist program and
 it uses only unprivileged user namespaces.
 Unprivileged user namespaces are available on all OS versions that
-OSG supports, although it is not enabled by default on EL 7;
-instructions to enable it are [below](#enabling-unprivileged-apptainer).
-The feature is enabled by default on EL 8.
+OSG supports.
 
 !!! danger "Kernel vs. Userspace Security"
     Enabling unprivileged user namespaces increases the risk to the
@@ -65,8 +63,7 @@ There are two sets of instructions on this page:
 OSG VOs all support running apptainer directly from CVMFS, when CVMFS
 is available and unprivileged user namespaces are enabled.
-Unprivileged user namespaces are enabled by default on EL 8, and OSG
-recommends that system administrators enable it on EL 7 worker nodes.
+Unprivileged user namespaces are enabled by default on EL 8+.
 When unprivileged user namespaces are enabled, OSG recommends that
 sites not install Apptainer unless they have non-OSG users that require it.
@@ -81,142 +78,6 @@ and will need to install an additional
 apptainer-suid RPM if they want a setuid installation that does not
 require unprivileged user namespaces.
-Enabling Unprivileged Apptainer
--------------------------------
-
-The instructions in this section are for enabling Apptainer to run
-unprivileged by enabling unprivileged user namespaces.
-
-1. Enable user namespaces via `sysctl` on EL 7:
-
-    If the operating system is an EL 7, enable unprivileged Apptainer
-    with the following steps.
-    This step is not needed on EL 8 because it is enabled by default.
-
-        :::console
-        root@host # echo "user.max_user_namespaces = 15000" \
-            > /etc/sysctl.d/90-max_user_namespaces.conf
-        root@host # sysctl -p /etc/sysctl.d/90-max_user_namespaces.conf
-
-1. (Recommended) Disable network namespaces:
-
-        :::console
-        root@host # echo "user.max_net_namespaces = 0" \
-            > /etc/sysctl.d/90-max_net_namespaces.conf
-        root@host # sysctl -p /etc/sysctl.d/90-max_net_namespaces.conf
-
-    OSG VOs do not need network namespaces with Apptainer, and
-    disabling them significantly lowers the risk profile of enabling user
-    namespaces and reduces the frequency of needing to apply urgent updates.
-    Most of the kernel vulnerabilities related to unprivileged user
-    namespaces over the last few years have been in combination with
-    network namespaces.
-
-    Network namespaces are, however, utilized by other software,
-    such as Docker or Podman.
-    Disabling network namespaces may break other
-    software, or limit its capabilities (such as requiring the
-    `--net=host` option in Docker or Podman).
-
-    Disabling network namespaces blocks the systemd PrivateNetwork
-    feature, which is a feature that is used by some EL 8 services.
-    It is also configured for some EL 7 services but they are all
-    disabled by default. To check them all, look for PrivateNetwork in
-    `/lib/systemd/system/*.service` and see which of those services are
-    enabled but failed to start. The only default such service on EL 8
-    is systemd-hostnamed, and a popular non-default such service is
-    mlocate-updatedb. The PrivateNetwork feature can be turned off for
-    a service without modifying an RPM-installed file through a
-    `.d/*.conf` file, for example for systemd-hostnamed:
-
-        :::console
-        root@host # cd /etc/systemd/system
-        root@host # mkdir -p systemd-hostnamed.service.d
-        root@host # (echo "[Service]"; echo "PrivateNetwork=no") \
-            >systemd-hostnamed.service.d/no-private-network.conf
-        root@host # systemctl daemon-reload
-        root@host # systemctl start systemd-hostnamed
-        root@host # systemctl status systemd-hostnamed
-
-
-### Configuring Docker to work with Apptainer ###
-
-If docker is being used to run jobs, the following options are
-recommended to allow unprivileged Apptainer to run (it does not
-need `--privileged` or any added capabilities):
-
-    ::console
-    --security-opt seccomp=unconfined --security-opt systempaths=unconfined
-
-`--security-opt seccomp=unconfined` enables unshare to be called
-(which is needed to create namespaces),
-and `--security-opt systempaths=unconfined` allows `/proc` to be mounted
-in an unprivileged process namespace (as is done by apptainer exec -p).
-`--security-opt systempaths=unconfined` requires Docker 19.03 or later.
-The options are secure as long as the system administrator controls
-the images and does not allow user code to run as root, and are
-generally more secure than adding capabilities. If at this point no
-setuid or setcap programs needs to be run within the container, adding the
-following option will improve security by preventing any privilege
-escalation (Apptainer uses the same feature on its containers):
-
-    ::console
-    --security-opt no-new-privileges
-
-In addition, the following option is recommended for allowing
-unprivileged fuse mounts:
-
-    ::console
-    --device=/dev/fuse
-
-### Configuring Unprivileged Apptainer ###
-
-When unprivileged user namespaces are enabled and VOs run apptainer from
-CVMFS, the Apptainer configuration file also comes from CVMFS so local
-sites have no control over changing the configuration. However, the
-most common local configuration change to the apptainer RPM is to add
-additional local "bind path" options to map extra local file paths into
-containers. This can instead be accomplished by setting the
-`APPTAINER_BINDPATH` variable in the environment of jobs, for
-example through
-[configuration](../other/configuration-with-osg-configure.md#local-settings)
-on your compute entrypoint.
-This is a comma-separated list of paths to bind, following the syntax of the
-`apptainer exec --bind` option.
-In order to be backward compatible with Singularity, also set
-`SINGULARITY_BINDPATH` to the same value.
-Apptainer also recognizes that variable but it prints a deprecation
-warning if only a `SINGULARITY_` variable is set without the
-corresponding `APPTAINER_` variable.
-
-There are also other environment variables that can affect Apptainer
-operation; see the
-[Apptainer documentation](https://apptainer.org/docs/user/main/appendix.html)
-for details.
-
-### Validating Unprivileged Apptainer in CVMFS ###
-
-If you will not be installing Apptainer locally and
-you haven't yet installed [CVMFS](install-cvmfs.md), please do so.
-Alternatively, use the
-[cvmfsexec package](https://github.com/cvmfs-contrib/cvmfsexec)
-configured for osg as an unprivileged user and mount the
-oasis.opensciencegrid.org and singularity.opensciencegrid.org
-repositories.
-
-Then as an unprivileged user verify that Apptainer in CVMFS works with this
-command:
-
-```console
-user@host $ /cvmfs/oasis.opensciencegrid.org/mis/apptainer/bin/apptainer \
-    exec --contain --ipc --pid --bind /cvmfs \
-    /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-el7:latest \
-    ps -ef
-UID          PID    PPID  C STIME TTY          TIME CMD
-user           1       0  0 10:51 console  00:00:00 appinit
-user          11       1  0 10:51 console  00:00:00 /usr/bin/ps -ef
-```
-
 Installing Apptainer
 --------------------
diff --git a/docs/worker-node/install-cvmfs.md b/docs/worker-node/install-cvmfs.md
index 5c0c18a96..1bfb31880 100644
--- a/docs/worker-node/install-cvmfs.md
+++ b/docs/worker-node/install-cvmfs.md
@@ -2,9 +2,6 @@ title: Installing and Maintaining the CernVM File System Client
 
 # Installing and Maintaining the CernVM File System Client
 
-!!!bug "EL7 version compatibility"
-    There is an incompatibility with EL7 < 7.5 due to an old version of the `selinux-policy` package
-
 The CernVM File System ([CVMFS](http://cernvm.cern.ch/portal/filesystem)) is an
 HTTP-based file distribution service used to provide data and software for jobs.
diff --git a/docs/worker-node/install-wn-oasis.md b/docs/worker-node/install-wn-oasis.md
index bc51fff70..6587fece0 100644
--- a/docs/worker-node/install-wn-oasis.md
+++ b/docs/worker-node/install-wn-oasis.md
@@ -36,8 +36,8 @@ Determine the OASIS path to the Worker Node Client software for your worker node
 
 | Worker Node OS | Use…                                                                                  |
 |:---------------|:-------------------------------------------------------------------------------------|
-| EL 7 (64-bit)  | `/cvmfs/oasis.opensciencegrid.org/osg-software/osg-wn-client/3.6/current/el7-x86_64`  |
-| EL 8 (64-bit)  | `/cvmfs/oasis.opensciencegrid.org/osg-software/osg-wn-client/3.6/current/el8-x86_64`  |
+| EL 8 (64-bit)  | `/cvmfs/oasis.opensciencegrid.org/osg-software/osg-wn-client/24/current/el8-x86_64`   |
+| EL 9 (64-bit)  | `/cvmfs/oasis.opensciencegrid.org/osg-software/osg-wn-client/24/current/el9-x86_64`   |
 
 On the CE, in the `/etc/osg/config.d/10-storage.ini` file, set the `grid_dir`
 configuration setting to the path from the previous step.
@@ -65,8 +65,8 @@ If you must log onto a worker node and use the Worker Node Client software direc
 
 | Worker Node OS | Run the following command…                                                                            |
 |:---------------|:-----------------------------------------------------------------------------------------------------|
-| EL 7 (64-bit)  | `source /cvmfs/oasis.opensciencegrid.org/osg-software/osg-wn-client/3.6/current/el7-x86_64/setup.sh`  |
-| EL 8 (64-bit)  | `source /cvmfs/oasis.opensciencegrid.org/osg-software/osg-wn-client/3.6/current/el8-x86_64/setup.sh`  |
+| EL 8 (64-bit)  | `source /cvmfs/oasis.opensciencegrid.org/osg-software/osg-wn-client/24/current/el8-x86_64/setup.sh`   |
+| EL 9 (64-bit)  | `source /cvmfs/oasis.opensciencegrid.org/osg-software/osg-wn-client/24/current/el9-x86_64/setup.sh`   |
 
 Getting Help
 ------------
diff --git a/docs/worker-node/install-wn-tarball.md b/docs/worker-node/install-wn-tarball.md
index 51511138a..636007db4 100644
--- a/docs/worker-node/install-wn-tarball.md
+++ b/docs/worker-node/install-wn-tarball.md
@@ -25,6 +25,9 @@ Before starting, ensure the host has [a supported operating system](../release/s
 Download the WN Client
 ----------------------
 
+!!! info "Where is the OSG 24 worker node tarball?"
+    We plan to distribute the worker node tarball in the coming weeks.
+
 Please pick the `osg-wn-client` tarball that is appropriate for your distribution and architecture. You will find them in .
 
 For OSG 23:
@@ -32,12 +35,6 @@ For OSG 23:
 
 - [Binaries for RHEL8-compatible](https://repo.opensciencegrid.org/tarball-install/23-main/osg-wn-client-latest.el8.x86_64.tar.gz)
 - [Binaries for RHEL9-compatible](https://repo.opensciencegrid.org/tarball-install/23-main/osg-wn-client-latest.el9.x86_64.tar.gz)
 
-For OSG 3.6:
-
-- [Binaries for RHEL7-compatible](https://repo.opensciencegrid.org/tarball-install/3.6/osg-wn-client-latest.el7.x86_64.tar.gz)
-- [Binaries for RHEL8-compatible](https://repo.opensciencegrid.org/tarball-install/3.6/osg-wn-client-latest.el8.x86_64.tar.gz)
-- [Binaries for RHEL9-compatible](https://repo.opensciencegrid.org/tarball-install/3.6/osg-wn-client-latest.el9.x86_64.tar.gz)
-
 Install the WN Client
 ---------------------
@@ -60,7 +57,7 @@ Example EL9 installation (in `/home/user/test-install`, the **`/
 ```console
 user@host $ mkdir /home/user/test-install
 user@host $ cd /home/user/test-install
-user@host $ wget https://repo.opensciencegrid.org/tarball-install/23-main/osg-wn-client-latest.el9.x86_64.tar.gz
+user@host $ wget https://repo.opensciencegrid.org/tarball-install/24-main/osg-wn-client-latest.el9.x86_64.tar.gz
 user@host $ tar xzf osg-wn-client-latest.el9.x86_64.tar.gz
 user@host $ cd osg-wn-client
 user@host $ ./osg/osg-post-install
@@ -113,7 +110,7 @@ Validating the Worker Node Client
 To verify functionality of the worker node client, you will need to submit a
 test job against your CE and verify the job's output.
 
-1. Submit a job that executes the `env` command (e.g. Run [`condor_ce_trace`](https://htcondor.github.io/htcondor-ce/v23/troubleshooting/debugging-tools/#condor_ce_trace) with the `-d` flag from your HTCondor CE)
+1. Submit a job that executes the `env` command (e.g. Run [`condor_ce_trace`](https://htcondor.github.io/htcondor-ce/v24/troubleshooting/debugging-tools/#condor_ce_trace) with the `-d` flag from your HTCondor CE)
 2. Verify that the value of `$OSG_GRID` is set to the directory of your worker node client installation
 
 How to get Help?
diff --git a/docs/worker-node/install-wn.md b/docs/worker-node/install-wn.md
index 0c2daeb36..e161a163a 100644
--- a/docs/worker-node/install-wn.md
+++ b/docs/worker-node/install-wn.md
@@ -46,13 +46,7 @@ Fetch-CRL is the only service required to support the WN Client.
 
 | Software  | Service name                           | Notes                                                 |
 |:----------|:--------------------------------------|:---------------------------------------------------------------------------------------|
-| Fetch CRL | EL8: `fetch-crl.timer` <br/> EL7: `fetch-crl-boot` and `fetch-crl-cron` | See [CA documentation](../common/ca.md) for more info |
-
-!!! note
-    `fetch-crl-boot` will begin fetching CRLS, which can take a few minutes and fail on transient errors. You can add configuration to ignore these transient errors in `/etc/fetch-crl.conf`:
-
-        :::file
-        noerrors
+| Fetch CRL | `fetch-crl.timer`                      | See [CA documentation](../common/ca.md) for more info |
 
 As a reminder, here are common service commands (all run as `root`):
diff --git a/docs/worker-node/using-wn-containers.md b/docs/worker-node/using-wn-containers.md
index cffa86aa5..29b4d1113 100644
--- a/docs/worker-node/using-wn-containers.md
+++ b/docs/worker-node/using-wn-containers.md
@@ -3,6 +3,10 @@ title: Using the Worker Node Containers
 
 Using the Worker Node Containers
 ================================
+!!! info "Where is the OSG 24 container?"
+    We are actively reworking our image build infrastructure for OSG 24 and expect to have all OSG Software containers
+    available by the end of 2024.
+
 The OSG worker node containers contain the suggested base environment for worker nodes.
 They can be used as a base image to build containers or to perform testing.
 The containers are available on [Docker Hub](https://hub.docker.com/r/opensciencegrid/osg-wn/).
@@ -37,4 +41,3 @@ You may perform testing from within the OSG worker node envionment by running th
 ```
 root@host # docker run -ti --rm opensciencegrid/osg-wn:latest /bin/bash
 ```
-
diff --git a/mkdocs.yml b/mkdocs.yml
index 49fd2beb8..4570b4be7 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -77,12 +77,11 @@ nav:
     - 'Publishing VO data': 'data/stashcache/vo-data.md'
   - Releases:
     - 'Release Series': 'release/release_series.md'
+    - OSG 24:
+      - 'News': 'release/osg-24.md'
     - OSG 23:
       - 'News': 'release/osg-23.md'
       - 'Updating to OSG 23': 'release/updating-to-osg-23.md'
-    - OSG 3.6:
-      - 'News': 'release/osg-36.md'
-      - 'Updating to OSG 3.6': 'release/updating-to-osg-36.md'
     - 'Supported Platforms': 'release/supported_platforms.md'
     - 'OSG Yum Repos': 'common/yum.md'
     - 'Yum Basics': 'release/yum-basics.md'