From 5567c6af51a7fbee7dcd4fba0ede239e22bacea4 Mon Sep 17 00:00:00 2001 From: <> Date: Tue, 11 Jun 2024 22:25:43 +0000 Subject: [PATCH] Deployed 29ca33935 with MkDocs version: 1.3.0 --- release/updating-to-osg-23/index.html | 3 +-- search/search_index.json | 2 +- sitemap.xml.gz | Bin 860 -> 860 bytes 3 files changed, 2 insertions(+), 3 deletions(-) diff --git a/release/updating-to-osg-23/index.html b/release/updating-to-osg-23/index.html index f22a9e386..1d7f0a03d 100644 --- a/release/updating-to-osg-23/index.html +++ b/release/updating-to-osg-23/index.html @@ -2115,8 +2115,7 @@

Updating CE packages: repository and RPM update process.

Starting CE services

After updating your RPMs and your configuration, turn on the HTCondor-CE service:

-
    :::console
-    root@host # systemctl start condor-ce
+
root@host # systemctl start condor-ce
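Once the service is started, a quick status check can confirm that it came up cleanly and, if desired, that it will start again on boot. This is an optional sanity check, assuming the same condor-ce systemd unit used above:

    :::console
    root@host # systemctl status condor-ce
    root@host # systemctl enable condor-ce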
 

Updating Your HTCondor Hosts

diff --git a/search/search_index.json b/search/search_index.json index 6e3e813d8..ef7e91199 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"OSG Site Documentation \u00b6 User documentation If you are a researcher interested in accessing OSG computational capacity, please consult our user documentation instead. The OSG Consortium provides common service and support for capacity providers and scientific institutions (i.e., \"sites\") using a distributed fabric of high throughput computational services. The OSG Consortium does not own computational capacity but provides software and services to users and capacity providers alike to enable the opportunistic usage and sharing of capacity. This documentation aims to provide HTC/HPC system administrators with the necessary information to contribute computational capacity to the OSG Consortium. Contributing to the OSG \u00b6 We offer two models for sites to contribute capacity to the OSG Consortium: one where OSG staff hosts and maintains capacity provisioning services for users; and the traditional model where the site hosts and maintains these same services. In both of these cases, the following will be needed: An existing compute cluster running on a supported operating system with a supported batch system: Grid Engine , HTCondor , LSF , PBS Pro / Torque , or Slurm . Outbound network connectivity from your cluster's worker nodes Temporary scratch space on each worker node Don't meet the requirements? If your site does not meet the above conditions, please contact us to discuss your options for contributing to the OSG Consortium. OSG-hosted services \u00b6 To contribute computational capacity with OSG-hosted services, your site will also need the following: Allow SSH access to your local cluster's login host from a known IP address Shared home directories on each cluster node Next steps If you are interested in OSG-hosted services, please contact us for a consultation, even if your site does not meet the conditions as outlined above! Self-hosted services \u00b6 If you are interested in contributing capacity by hosting your own OSG services, please continue with the site planning page.","title":"Home"},{"location":"#osg-site-documentation","text":"User documentation If you are a researcher interested in accessing OSG computational capacity, please consult our user documentation instead. The OSG Consortium provides common service and support for capacity providers and scientific institutions (i.e., \"sites\") using a distributed fabric of high throughput computational services. The OSG Consortium does not own computational capacity but provides software and services to users and capacity providers alike to enable the opportunistic usage and sharing of capacity. This documentation aims to provide HTC/HPC system administrators with the necessary information to contribute computational capacity to the OSG Consortium.","title":"OSG Site Documentation"},{"location":"#contributing-to-the-osg","text":"We offer two models for sites to contribute capacity to the OSG Consortium: one where OSG staff hosts and maintains capacity provisioning services for users; and the traditional model where the site hosts and maintains these same services. 
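A quick way to spot-check the "outbound network connectivity from your cluster's worker nodes" requirement above is to fetch a well-known URL from a worker node. This is only an illustrative check, not an official test; the osg-htc.org URL is used here simply because it appears elsewhere in this documentation, and any reliable external HTTPS endpoint would do:

    :::console
    user@worker-node $ curl -sI https://osg-htc.org

A response beginning with an HTTP status line (for example, a 200) suggests that outbound HTTPS access from that worker node is working.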
In both of these cases, the following will be needed: An existing compute cluster running on a supported operating system with a supported batch system: Grid Engine , HTCondor , LSF , PBS Pro / Torque , or Slurm . Outbound network connectivity from your cluster's worker nodes Temporary scratch space on each worker node Don't meet the requirements? If your site does not meet the above conditions, please contact us to discuss your options for contributing to the OSG Consortium.","title":"Contributing to the OSG"},{"location":"#osg-hosted-services","text":"To contribute computational capacity with OSG-hosted services, your site will also need the following: Allow SSH access to your local cluster's login host from a known IP address Shared home directories on each cluster node Next steps If you are interested in OSG-hosted services, please contact us for a consultation, even if your site does not meet the conditions as outlined above!","title":"OSG-hosted services"},{"location":"#self-hosted-services","text":"If you are interested in contributing capacity by hosting your own OSG services, please continue with the site planning page.","title":"Self-hosted services"},{"location":"site-maintenance/","text":"Site Maintenance \u00b6 This document outlines how to maintain your OSG site, including steps to take if you suspect that OSG jobs are causing issues. Handle Misbehaving Jobs \u00b6 In rare instances, you may experience issues at your site caused by misbehaving jobs (e.g., over-utilization of memory) from an OSG community or Virtual Organization (VO). If this occurs, you should immediately stop accepting job submissions from the OSG and remove the offending jobs: Configure your batch system to stop accepting jobs from the VO: For HTCondor batch systems, set the following in /etc/condor/config.d/ on your HTCondor-CE or Access Point accepting jobs from an OSG Hosted CE: SUBMIT_REQUIREMENT_Ban_OSG = (Owner != \"\") SUBMIT_REQUIREMENT_Ban_OSG_REASON = \"OSG pilot job submission temporarily disabled\" SUBMIT_REQUIREMENT_NAMES = $(SUBMIT_REQUIREMENT_NAMES) Ban_OSG Replacing with the name of the local Unix account corresponding to the problematic VO. For Slurm batch systems, disable the relevant Slurm partition : [root@host] # scontrol update PartitionName = State = DOWN Replacing with the name of the partition where you are sending OSG jobs. Remove the VO's jobs: For HTCondor batch systems, run the following command on your HTCondor-CE or Access Point accepting jobs from an OSG Hosted CE: [root@access-point] # condor_rm Replacing with the name of the local Unix account corresponding to the problematic VO. For Slurm batch systems, run the following command: [root@host] # scancel -u Replacing with the name of the local Unix account corresponding to the problematic VO. Let us know so that we can track down the offending software or user: the same issue that you're experiencing may also be affecting other sites! Keep OSG Software Updated \u00b6 It is important to keep your software and data (e.g., CAs and VO client) up-to-date with the latest OSG release. See the release notes for your installed release series: OSG 3.6 release notes To stay abreast of software releases, we recommend subscribing to the osg-sites@opensciencegrid.org mailing list. 
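In the "Handle Misbehaving Jobs" instructions above, the rendered text dropped the placeholder for the local Unix account name that should be substituted into the ban configuration and removal commands. A filled-in sketch, using the hypothetical account name osg01 for the problematic VO (replace it with your site's actual account), might look like:

    SUBMIT_REQUIREMENT_Ban_OSG = (Owner != "osg01")
    SUBMIT_REQUIREMENT_Ban_OSG_REASON = "OSG pilot job submission temporarily disabled"
    SUBMIT_REQUIREMENT_NAMES = $(SUBMIT_REQUIREMENT_NAMES) Ban_OSG

followed by removing that account's jobs with the commands quoted above:

    :::console
    root@access-point # condor_rm osg01
    root@host # scancel -u osg01

The first removal command applies to HTCondor batch systems and the second to Slurm; both take the same local Unix account name.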
Notify OSG of Major Changes \u00b6 To avoid potential issues with OSG job submissions, please notify us of major changes to your site, including: Major OS version changes on the worker nodes (e.g., upgraded from EL 7 to EL 8) Adding or removing container support through singularity or apptainer Policy changes regarding OSG resource requests (e.g., number of cores or GPUs, memory usage, or maximum walltime) Scheduled or unscheduled downtimes Site topology changes such as additions, modifications, or retirements of OSG services Changes to site contacts, such as administrative or security staff Help \u00b6 If you need help with your site, or need to report a security incident, follow the contact instructions .","title":"Site Maintenance"},{"location":"site-maintenance/#site-maintenance","text":"This document outlines how to maintain your OSG site, including steps to take if you suspect that OSG jobs are causing issues.","title":"Site Maintenance"},{"location":"site-maintenance/#handle-misbehaving-jobs","text":"In rare instances, you may experience issues at your site caused by misbehaving jobs (e.g., over-utilization of memory) from an OSG community or Virtual Organization (VO). If this occurs, you should immediately stop accepting job submissions from the OSG and remove the offending jobs: Configure your batch system to stop accepting jobs from the VO: For HTCondor batch systems, set the following in /etc/condor/config.d/ on your HTCondor-CE or Access Point accepting jobs from an OSG Hosted CE: SUBMIT_REQUIREMENT_Ban_OSG = (Owner != \"\") SUBMIT_REQUIREMENT_Ban_OSG_REASON = \"OSG pilot job submission temporarily disabled\" SUBMIT_REQUIREMENT_NAMES = $(SUBMIT_REQUIREMENT_NAMES) Ban_OSG Replacing with the name of the local Unix account corresponding to the problematic VO. For Slurm batch systems, disable the relevant Slurm partition : [root@host] # scontrol update PartitionName = State = DOWN Replacing with the name of the partition where you are sending OSG jobs. Remove the VO's jobs: For HTCondor batch systems, run the following command on your HTCondor-CE or Access Point accepting jobs from an OSG Hosted CE: [root@access-point] # condor_rm Replacing with the name of the local Unix account corresponding to the problematic VO. For Slurm batch systems, run the following command: [root@host] # scancel -u Replacing with the name of the local Unix account corresponding to the problematic VO. Let us know so that we can track down the offending software or user: the same issue that you're experiencing may also be affecting other sites!","title":"Handle Misbehaving Jobs"},{"location":"site-maintenance/#keep-osg-software-updated","text":"It is important to keep your software and data (e.g., CAs and VO client) up-to-date with the latest OSG release. 
See the release notes for your installed release series: OSG 3.6 release notes To stay abreast of software releases, we recommend subscribing to the osg-sites@opensciencegrid.org mailing list.","title":"Keep OSG Software Updated"},{"location":"site-maintenance/#notify-osg-of-major-changes","text":"To avoid potential issues with OSG job submissions, please notify us of major changes to your site, including: Major OS version changes on the worker nodes (e.g., upgraded from EL 7 to EL 8) Adding or removing container support through singularity or apptainer Policy changes regarding OSG resource requests (e.g., number of cores or GPUs, memory usage, or maximum walltime) Scheduled or unscheduled downtimes Site topology changes such as additions, modifications, or retirements of OSG services Changes to site contacts, such as administrative or security staff","title":"Notify OSG of Major Changes"},{"location":"site-maintenance/#help","text":"If you need help with your site, or need to report a security incident, follow the contact instructions .","title":"Help"},{"location":"site-planning/","text":"Site Planning \u00b6 The OSG vision is to integrate computing across different resource types and business models to allow campus IT to offer a maximally flexible high throughput computing (HTC) environment for their researchers. This document is for System Administrators and aims to provide an overview of the different options to consider when planning to share resources via the OSG. After reading, you should be able to understand what software or services you want to provide to support your researchers Note This document covers the most common options. OSG is a diverse infrastructure: depending on what groups you want to support, you may need to install additional services. Coordinate with your local researchers. OSG Site Services \u00b6 The OSG Software stack tries to provide a uniform computing and storage fabric across many independently-managed computing and storage resources. These individual services will be accessed by virtual organizations (VOs), which will delegate the resources to scientists, researchers, and students. Sharing is a fundamental principle for the OSG: your site is encouraged to support as many OSG-registered VOs as local conditions allow. Autonomy is another principle: you are not required to support any VOs you do not want. As the administrator, your task is to make your existing computing and storage resources available to and reliable for your supported VOs. We break this down into three tasks: Getting \"pilot jobs\" submitted to your site batch system. Establishing an OSG runtime environment for running jobs. Delivering data to payload applications to be processed. There are multiple approaches for each item, depending on the VOs you support, and time you have to invest in the OSG. Note An essential concept in the OSG is the \"pilot job\". The pilot, which arrives at your batch system, is sent by the VO to get a resource allocation. However, it does not contain any research payload. Once started, it will connect back to a resource pool and pull down individuals' research \"payload jobs\". Hence, we do not think about submitting \"jobs\" to sites but rather \"resource requests\". Pilot Jobs \u00b6 Traditionally, an OSG Compute Entrypoint (CE) provides remote access for VOs to submit pilot jobs to your local batch system . 
There are two options for accepting pilot jobs at your site: Hosted CE : OSG will run and operate the CE services at no cost; the site only needs to provide a SSH pubkey-based authentication access to the central OSG host. OSG will interface with the VO and submit pilots directly to your batch system via SSH. By far, this is the simplest option : however, it is less-scalable and the site delegates many of the scheduling decisions to the OSG. Contact help@osg-htc.org for more information on the hosted CE. OSG CE : The traditional option where the site installs and operates a HTCondor-based CE on a dedicated host. This provides the best scalability and flexibility, but may require an ongoing time investment from the site. The OSG CE install and operation is covered in this documentation page . There are additional ways that pilots can be started at a site (either by the site administrator or an end-user); see resource sharing for more details. Runtime environment \u00b6 The OSG requires a very minimal runtime environment that can be deployed via tarball , RPM , or through a global filesystem on your cluster's worker nodes. We believe that all research applications should be portable and self-contained, with no OS dependencies. This provides access to the most resources and minimizes the presence at sites. However, this ideal is often difficult to achieve in practice. For sites that want to support a uniform runtime environment, we provide a global filesystem called CVMFS that VOs can use to distribute their own software dependencies. Finally, many researchers use applications that require a specific OS environment - not just individual dependencies - that is distributed as a container. OSG supports the use of the Singularity container runtime with Docker-based image distribution. Data Services \u00b6 Whether accessed through CVMFS or command-line software like curl , the majority of software is moved via HTTP in cache-friendly patterns. All sites are highly encouraged to use an HTTP proxy to reduce the load on the WAN from the cluster. Depending on the VOs you want to support, additional data services may be necessary: Some VOs elect to stream their larger input data from offsite using OSG's Data Federation . User jobs can make use of the OSG Data Federation without any services at your site but you may wish to run one or more of the following services: Data Cache to further reduce load on your connection to the WAN. Data Origin to allow local users to stage their data into the OSG Data Federation. The largest sites will additionally run large-scale data services such as a \"storage element\". This is often required for sites that want to support more complex organizations such as ATLAS or CMS. Site Policies \u00b6 Sites are encouraged to clearly specify and communicate their local policies regarding resource access. One common mechanism to do this is post them on a web page and make this page part of your site registration . Written policies help external entities understand what your site wants to accomplish with the OSG -- and are often internally clarifying. In line of our principle of sharing , we encourage you to allow virtual organizations registered with the OSG \"opportunistic use\" of your resources. You may need to preempt those jobs when higher priority jobs come around. The end-users using the OSG generally prefer having access to your site subject to preemption over having no access at all. 
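For the Hosted CE option described above, the SSH pubkey-based access typically amounts to creating a dedicated local account that OSG staff can use for pilot submission and installing the public key they provide. The following is only a rough sketch under those assumptions; the account name osg01 and the key filename are hypothetical, and the actual account details are agreed on with OSG staff during the consultation:

    :::console
    root@host # useradd osg01
    root@host # install -d -m 700 -o osg01 -g osg01 ~osg01/.ssh
    root@host # cat osg-provided-key.pub >> ~osg01/.ssh/authorized_keys
    root@host # chown osg01:osg01 ~osg01/.ssh/authorized_keys
    root@host # chmod 600 ~osg01/.ssh/authorized_keys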
Getting Help \u00b6 If you need help with planning your site, follow the contact instructions .","title":"Site Planning"},{"location":"site-planning/#site-planning","text":"The OSG vision is to integrate computing across different resource types and business models to allow campus IT to offer a maximally flexible high throughput computing (HTC) environment for their researchers. This document is for System Administrators and aims to provide an overview of the different options to consider when planning to share resources via the OSG. After reading, you should be able to understand what software or services you want to provide to support your researchers Note This document covers the most common options. OSG is a diverse infrastructure: depending on what groups you want to support, you may need to install additional services. Coordinate with your local researchers.","title":"Site Planning"},{"location":"site-planning/#osg-site-services","text":"The OSG Software stack tries to provide a uniform computing and storage fabric across many independently-managed computing and storage resources. These individual services will be accessed by virtual organizations (VOs), which will delegate the resources to scientists, researchers, and students. Sharing is a fundamental principle for the OSG: your site is encouraged to support as many OSG-registered VOs as local conditions allow. Autonomy is another principle: you are not required to support any VOs you do not want. As the administrator, your task is to make your existing computing and storage resources available to and reliable for your supported VOs. We break this down into three tasks: Getting \"pilot jobs\" submitted to your site batch system. Establishing an OSG runtime environment for running jobs. Delivering data to payload applications to be processed. There are multiple approaches for each item, depending on the VOs you support, and time you have to invest in the OSG. Note An essential concept in the OSG is the \"pilot job\". The pilot, which arrives at your batch system, is sent by the VO to get a resource allocation. However, it does not contain any research payload. Once started, it will connect back to a resource pool and pull down individuals' research \"payload jobs\". Hence, we do not think about submitting \"jobs\" to sites but rather \"resource requests\".","title":"OSG Site Services"},{"location":"site-planning/#pilot-jobs","text":"Traditionally, an OSG Compute Entrypoint (CE) provides remote access for VOs to submit pilot jobs to your local batch system . There are two options for accepting pilot jobs at your site: Hosted CE : OSG will run and operate the CE services at no cost; the site only needs to provide a SSH pubkey-based authentication access to the central OSG host. OSG will interface with the VO and submit pilots directly to your batch system via SSH. By far, this is the simplest option : however, it is less-scalable and the site delegates many of the scheduling decisions to the OSG. Contact help@osg-htc.org for more information on the hosted CE. OSG CE : The traditional option where the site installs and operates a HTCondor-based CE on a dedicated host. This provides the best scalability and flexibility, but may require an ongoing time investment from the site. The OSG CE install and operation is covered in this documentation page . 
There are additional ways that pilots can be started at a site (either by the site administrator or an end-user); see resource sharing for more details.","title":"Pilot Jobs"},{"location":"site-planning/#runtime-environment","text":"The OSG requires a very minimal runtime environment that can be deployed via tarball , RPM , or through a global filesystem on your cluster's worker nodes. We believe that all research applications should be portable and self-contained, with no OS dependencies. This provides access to the most resources and minimizes the presence at sites. However, this ideal is often difficult to achieve in practice. For sites that want to support a uniform runtime environment, we provide a global filesystem called CVMFS that VOs can use to distribute their own software dependencies. Finally, many researchers use applications that require a specific OS environment - not just individual dependencies - that is distributed as a container. OSG supports the use of the Singularity container runtime with Docker-based image distribution.","title":"Runtime environment"},{"location":"site-planning/#data-services","text":"Whether accessed through CVMFS or command-line software like curl , the majority of software is moved via HTTP in cache-friendly patterns. All sites are highly encouraged to use an HTTP proxy to reduce the load on the WAN from the cluster. Depending on the VOs you want to support, additional data services may be necessary: Some VOs elect to stream their larger input data from offsite using OSG's Data Federation . User jobs can make use of the OSG Data Federation without any services at your site but you may wish to run one or more of the following services: Data Cache to further reduce load on your connection to the WAN. Data Origin to allow local users to stage their data into the OSG Data Federation. The largest sites will additionally run large-scale data services such as a \"storage element\". This is often required for sites that want to support more complex organizations such as ATLAS or CMS.","title":"Data Services"},{"location":"site-planning/#site-policies","text":"Sites are encouraged to clearly specify and communicate their local policies regarding resource access. One common mechanism to do this is post them on a web page and make this page part of your site registration . Written policies help external entities understand what your site wants to accomplish with the OSG -- and are often internally clarifying. In line of our principle of sharing , we encourage you to allow virtual organizations registered with the OSG \"opportunistic use\" of your resources. You may need to preempt those jobs when higher priority jobs come around. The end-users using the OSG generally prefer having access to your site subject to preemption over having no access at all.","title":"Site Policies"},{"location":"site-planning/#getting-help","text":"If you need help with planning your site, follow the contact instructions .","title":"Getting Help"},{"location":"site-verification/","text":"Site Verification \u00b6 After installing and registering services from the site planning document , you will need to perform some verification steps before your site can scale up to full production . 
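Where the runtime environment discussion above mentions distributing VO software through the CVMFS global filesystem, a simple way to confirm that CVMFS is working on a worker node is to probe the configured repositories. This assumes the CVMFS client has already been installed and configured on that node; cvmfs_config probe is part of the standard CVMFS client:

    :::console
    user@worker-node $ cvmfs_config probe

Each configured repository should report OK; failures usually point at proxy or mount configuration issues.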
Verify OSG Software \u00b6 To verify your site's installation of OSG Software, you will need to: Submit local test jobs Contact the OSG for end-to-end tests of pilot job submission Check that OSG usage is reported to the GRACC Local verification \u00b6 It is useful to submit jobs from within your site to verify CE's ability to submit jobs to your local batch system. Consult the document for submitting jobs into an HTCondor-CE for detailed instructions on how to test job submission. Verify end-to-end pilot job submission \u00b6 Once you have validated job submission from within your site, request test pilot jobs from OSG Factory Operations and provide the following information: The fully qualified domain name of the CE Registered OSG resource name Supported OS version of your worker nodes (e.g., EL7, EL8, or a combination) Support for multicore jobs Support for GPUs Maximum job walltime Maximum job memory usage Once the Factory Operations team has enough information, they will start submitting pilots to your CE. Initially, this will be a handful of pilots at a time but once the factory verifies that pilot jobs are running successfully, that number will be ramped up. Verify reporting and monitoring \u00b6 To verify that your site is correctly reporting to the OSG, visit OSG's Accounting Portal and select your registered OSG site name from the Site dropdown. If you don't see your site in the dropdown, please contact us for assistance . Scale Up to Full Production \u00b6 After verifying end-to-end pilot job submission and usage reporting, your site is ready for production! In the same OSG Factory Operations ticket that you opened above , let OSG staff know when you are ready to accept production pilots. After requesting production pilots, review the documentation for how to maintain an OSG site . Getting Help \u00b6 If you need help with your site, or need to report a security incident, follow the contact instructions .","title":"Site Verification"},{"location":"site-verification/#site-verification","text":"After installing and registering services from the site planning document , you will need to perform some verification steps before your site can scale up to full production .","title":"Site Verification"},{"location":"site-verification/#verify-osg-software","text":"To verify your site's installation of OSG Software, you will need to: Submit local test jobs Contact the OSG for end-to-end tests of pilot job submission Check that OSG usage is reported to the GRACC","title":"Verify OSG Software"},{"location":"site-verification/#local-verification","text":"It is useful to submit jobs from within your site to verify CE's ability to submit jobs to your local batch system. Consult the document for submitting jobs into an HTCondor-CE for detailed instructions on how to test job submission.","title":"Local verification"},{"location":"site-verification/#verify-end-to-end-pilot-job-submission","text":"Once you have validated job submission from within your site, request test pilot jobs from OSG Factory Operations and provide the following information: The fully qualified domain name of the CE Registered OSG resource name Supported OS version of your worker nodes (e.g., EL7, EL8, or a combination) Support for multicore jobs Support for GPUs Maximum job walltime Maximum job memory usage Once the Factory Operations team has enough information, they will start submitting pilots to your CE. 
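As part of the local verification step above, the condor_ce_trace tool shipped with the HTCondor-CE client can submit a short diagnostic job through the CE and report each step. This is only a sketch; the CE hostname below is a placeholder for your own, and the HTCondor-CE job submission document referenced above covers the full set of options:

    :::console
    user@host $ condor_ce_trace ce.example.edu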
Initially, this will be a handful of pilots at a time but once the factory verifies that pilot jobs are running successfully, that number will be ramped up.","title":"Verify end-to-end pilot job submission"},{"location":"site-verification/#verify-reporting-and-monitoring","text":"To verify that your site is correctly reporting to the OSG, visit OSG's Accounting Portal and select your registered OSG site name from the Site dropdown. If you don't see your site in the dropdown, please contact us for assistance .","title":"Verify reporting and monitoring"},{"location":"site-verification/#scale-up-to-full-production","text":"After verifying end-to-end pilot job submission and usage reporting, your site is ready for production! In the same OSG Factory Operations ticket that you opened above , let OSG staff know when you are ready to accept production pilots. After requesting production pilots, review the documentation for how to maintain an OSG site .","title":"Scale Up to Full Production"},{"location":"site-verification/#getting-help","text":"If you need help with your site, or need to report a security incident, follow the contact instructions .","title":"Getting Help"},{"location":"common/ca/","text":"Installing Certificate Authorities (CAs) \u00b6 The certificate authorities (CAs) provide the trust roots for the public key infrastructure OSG uses to maintain integrity of its sites and services. This document provides details of various options to install the Certificate Authority (CA) certificates and have up-to-date certificate revocation lists (CRLs) on your OSG hosts. We provide three options for installing CA certificates that offer varying levels of control: Install an RPM for a specific set of CA certificates ( default ) Install osg-ca-scripts , a set of scripts that provide fine-grained CA management Install an RPM that doesn't install any CAs. This is useful if you'd like to manage CAs yourself while satisfying RPM dependencies. Prior to following the instructions on this page, you must enable our yum repositories Installing CA Certificates \u00b6 Please choose one of the three options to install CA certificates. Option 1: Install an RPM for a specific set of CA certificates \u00b6 Note This option is the default if you install OSG software without pre-installing CAs. For example, yum install osg-ce will bring in osg-ca-certs by default. In the OSG repositories, you will find two different sets of predefined CA certificates: ( default ) The OSG CA certificates. This is similar to the IGTF set but may have a small number of additions or deletions The IGTF CA certificates See this page for details of the contents of the OSG CA package. If you chose... Then run the following command... OSG CA certificates yum install osg-ca-certs IGTF CA certificates yum install igtf-ca-certs To automatically keep your RPM installation of CAs up to date, we recommend the OSG CA certificates updater service. Option 2: Install osg-ca-scripts \u00b6 The osg-ca-scripts package provides scripts to install and update predefined sets of CAs with the ability to add or remove specific CAs. The OSG CA certificates. This is similar to the IGTF set but may have a small number of additions or deletions The IGTF CA certificates See this page for details of the contents of the OSG CA package. Install the osg-ca-scripts package: root@host # yum install osg-ca-scripts Choose and install the CA certificate set: If you choose... Then run the following command... 
OSG CA certificates osg-ca-manage setupCA --location root --url osg IGTF CA certificates osg-ca-manage setupCA --location root --url igtf Enable the osg-update-certs-cron service to enable periodic CA updates. As a reminder, here are common service commands (all run as root ): To... Run the command... Start a service systemctl start Stop a service systemctl stop Enable a service to start on boot systemctl enable Disable a service from starting on boot systemctl disable (Optional) To add a new CA: osg-ca-manage add [--dir ] --hash (Optional) To remove a CA osg-ca-manage remove --hash A complete set of options available though osg-ca-manage command, can be found in the osg-ca-manage documentation Option 3: Site-managed CAs \u00b6 If you want to handle the list of CAs completely internally to your site, you can utilize the empty-ca-certs RPM to satisfy RPM dependencies while not actually installing any CAs. To install this RPM, run the following command: root@host # yum install empty-ca-certs \u2013-enablerepo = osg-empty Warning If you choose this option, you are responsible for installing and maintaining the CA certificates. They must be installed in /etc/grid-security/certificates , or a symlink must be made from that location to the directory that contains the CA certificates. Installing other CAs \u00b6 In addition to the above CAs, you can install other CAs via RPM. These only work with the RPMs that provide CAs (that is, osg-ca-certs and the like, but not osg-ca-scripts .) They are in addition to the above RPMs, so do not only install these extra CAs. Set of CAs RPM name Installation command (as root) cilogon-openid cilogon-openid-ca-cert yum install cilogon-openid-ca-cert Verifying CA Certificates \u00b6 After installing or updating the CA certificates, they can be verified with the following command: root@host # curl --cacert \\ --capath \\ -o /dev/null \\ https://gracc.opensciencegrid.org \\ && echo \"CA certificate installation verified\" Where is the path to a valid X.509 CA certificate and is the path to the directory containing the installed CA certificates. For example, the following command can be used to verify a default OSG CA certificate installation: root@host # curl --cacert /etc/grid-security/certificates/cilogon-osg.pem \\ --capath /etc/grid-security/certificates/ \\ -o /dev/null \\ https://gracc.opensciencegrid.org \\ && echo \"CA certificate installation verified\" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 22005 0 22005 0 0 86633 0 --:--:-- --:--:-- --:--:-- 499k CA certificate installation verified If you do not see CA certificate installation verified this means that your CA certificate installation is broken. First, ensure that your CA installation is up-to-date and if you continue to see issues please contact us . Keeping CA Certificates Up-to-date \u00b6 It is important to keep CA certificates up-to-date for services and their clients to maintain integrity of production services. 
To verify that your CA certificates are on the latest version on a given host, determine the most recently released versions and the method by which your CA certificates have been installed: Retrieve the versions of the most recently released IGTF CA certificates and OSG CA certificates Determine which of the three CA certificate installation methods you are using: # rpm -q igtf-ca-certs osg-ca-certs osg-ca-scripts empty-ca-certs Based on which package is installed from the output in the previous step, choose one of the following options: If igtf-ca-certs or osg-ca-certs is installed , compare the installed version from step 2 to the corresponding version from step 1. If the version is older than the corresponding version from step 1, continue onto option 1 to upgrade your current installation and keep your installation up-to-date. If the versions match, your CA certificates are up-to-date! If osg-ca-scripts is installed , run the following command to update your CA certificates: # osg-ca-manage refreshCA And continue to the instructions in option 2 to enable automatic updates of your CA certificates. If empty-ca-scripts is installed , then you are responsible for maintaining your own CA certificates as outlined in option 3 . If none of the packages are installed , your host likely does not need CA certificates and you are done. Managing Certificate Revocation Lists \u00b6 In addition to CA certificates, you must have updated Certificate Revocation Lists (CRLs). CRLs contain certificate blacklists that OSG software uses to ensure that your hosts are only talking to valid clients or servers. To maintain up to date CAs, you will need to run the fetch-crl services. Note Normally fetch-crl is installed when you install the rest of the software and you do not need to explicitly install it. If you do wish to install it manually, run the following command: root@host # yum install fetch-crl If you do not wish to change the frequency of fetch-crl updates (default: every 6 hours) or use syslog for fetch-crl output, skip to the service management section Optional: configuring fetch-crl \u00b6 The following sub-sections contain optional configuration instructions. Note Note that the nosymlinks option in the configuration files refers to ignoring links within the certificates directory (e.g. two different names for the same file). It is perfectly fine if the path of the CA certificates directory itself ( infodir ) is a link to a directory. Changing the frequency of fetch-crl-cron \u00b6 To modify the times that fetch-crl-cron runs, edit /etc/cron.d/fetch-crl . Logging with syslog \u00b6 fetch-crl can produce quite a bit of output when run in verbose mode. To send fetch-crl output to syslog, use the following instructions: Change the configuration file to enable syslog: logmode = syslog syslogfacility = daemon Make sure the file /var/log/daemon exists, e.g. touching the file Change /etc/logrotate.d files to rotate it Managing fetch-crl services \u00b6 fetch-crl is installed as two different system services. The fetch-crl-boot service runs fetch-crl and is intended to only be enabled or disabled. The fetch-crl-cron service runs fetch-crl every 6 hours (with a random sleep time included). Both services are disabled by default. At the very minimum, the fetch-crl-cron service needs to be enabled and started, otherwise services will begin to fail as existing CRLs expire. 
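As a compact illustration of the version check and service management described above, the following commands query which CA packages are installed and enable the periodic CRL updates. The package query is taken directly from the instructions above; the systemctl lines assume the EL7 service names (on EL8, the single fetch-crl.timer unit listed in the table below takes their place), and the --now flag both enables and starts each service:

    :::console
    root@host # rpm -q igtf-ca-certs osg-ca-certs osg-ca-scripts empty-ca-certs
    root@host # systemctl enable --now fetch-crl-cron
    root@host # systemctl enable --now fetch-crl-boot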
Software Service name Notes Fetch CRL fetch-crl.timer (EL8-only) Runs fetch-crl every 6 hours and on boot fetch-crl-cron (EL7-only) Runs fetch-crl every 6 hours fetch-crl-boot (EL7-only) Runs fetch-crl immediately and on boot Start the services in the order listed and stop them in reverse order. As a reminder, here are common service commands (all run as root ): To... Run the command... Start a service systemctl start Stop a service systemctl stop Enable a service to start on boot systemctl enable Disable a service from starting on boot systemctl disable Getting Help \u00b6 To get assistance, please use the this page . References \u00b6 Some guides on X.509 certificates: Useful commands: http://security.ncsa.illinois.edu/research/grid-howtos/usefulopenssl.html Install GSI authentication on a server: http://security.ncsa.illinois.edu/research/wssec/gsihttps/ Certificates how-to: http://www.nordugrid.org/documents/certificate_howto.html See this page for examples of verifying certificates. Related software: osg-ca-manage osg-ca-certs-updater Configuration files \u00b6 Package File Description Location Comment All CA Packages CA File Location /etc/grid-security/certificates All CA Packages Index files /etc/grid-security/certificates/INDEX.html or /etc/grid-security/certificates/INDEX.txt Latest version also available at http://repo.opensciencegrid.org/cadist/ All CA Packages Change Log /etc/grid-security/certificates/CHANGES Latest version also available at http://repo.opensciencegrid.org/cadist/CHANGES osg-ca-certs or igtf-ca-certs contain only CA files osg-ca-scripts Configuration File for osg-update-certs /etc/osg/osg-update-certs.conf This file may be edited by hand, though it is recommended to use osg-ca-manage to set configuration parameters. fetch-crl-3.x Configuration file /etc/fetch-crl.conf The index and change log files contain a summary of all the CA distributed and their version. Logs files \u00b6 Package File Description Location osg-ca-scripts Log file of osg-update-certs /var/log/osg-update-certs.log osg-ca-scripts Stdout of osg-update-certs /var/log/osg-ca-certs-status.system.out osg-ca-scripts Stdout of osg-ca-manage /var/log/osg-ca-manage.system.out osg-ca-scripts Stdout of initial CA setup /var/log/osg-setup-ca-certificates.system.out","title":"Overview"},{"location":"common/ca/#installing-certificate-authorities-cas","text":"The certificate authorities (CAs) provide the trust roots for the public key infrastructure OSG uses to maintain integrity of its sites and services. This document provides details of various options to install the Certificate Authority (CA) certificates and have up-to-date certificate revocation lists (CRLs) on your OSG hosts. We provide three options for installing CA certificates that offer varying levels of control: Install an RPM for a specific set of CA certificates ( default ) Install osg-ca-scripts , a set of scripts that provide fine-grained CA management Install an RPM that doesn't install any CAs. This is useful if you'd like to manage CAs yourself while satisfying RPM dependencies. 
Prior to following the instructions on this page, you must enable our yum repositories","title":"Installing Certificate Authorities (CAs)"},{"location":"common/ca/#installing-ca-certificates","text":"Please choose one of the three options to install CA certificates.","title":"Installing CA Certificates"},{"location":"common/ca/#option-1-install-an-rpm-for-a-specific-set-of-ca-certificates","text":"Note This option is the default if you install OSG software without pre-installing CAs. For example, yum install osg-ce will bring in osg-ca-certs by default. In the OSG repositories, you will find two different sets of predefined CA certificates: ( default ) The OSG CA certificates. This is similar to the IGTF set but may have a small number of additions or deletions The IGTF CA certificates See this page for details of the contents of the OSG CA package. If you chose... Then run the following command... OSG CA certificates yum install osg-ca-certs IGTF CA certificates yum install igtf-ca-certs To automatically keep your RPM installation of CAs up to date, we recommend the OSG CA certificates updater service.","title":"Option 1: Install an RPM for a specific set of CA certificates"},{"location":"common/ca/#option-2-install-osg-ca-scripts","text":"The osg-ca-scripts package provides scripts to install and update predefined sets of CAs with the ability to add or remove specific CAs. The OSG CA certificates. This is similar to the IGTF set but may have a small number of additions or deletions The IGTF CA certificates See this page for details of the contents of the OSG CA package. Install the osg-ca-scripts package: root@host # yum install osg-ca-scripts Choose and install the CA certificate set: If you choose... Then run the following command... OSG CA certificates osg-ca-manage setupCA --location root --url osg IGTF CA certificates osg-ca-manage setupCA --location root --url igtf Enable the osg-update-certs-cron service to enable periodic CA updates. As a reminder, here are common service commands (all run as root ): To... Run the command... Start a service systemctl start Stop a service systemctl stop Enable a service to start on boot systemctl enable Disable a service from starting on boot systemctl disable (Optional) To add a new CA: osg-ca-manage add [--dir ] --hash (Optional) To remove a CA osg-ca-manage remove --hash A complete set of options available though osg-ca-manage command, can be found in the osg-ca-manage documentation","title":"Option 2: Install osg-ca-scripts"},{"location":"common/ca/#option-3-site-managed-cas","text":"If you want to handle the list of CAs completely internally to your site, you can utilize the empty-ca-certs RPM to satisfy RPM dependencies while not actually installing any CAs. To install this RPM, run the following command: root@host # yum install empty-ca-certs \u2013-enablerepo = osg-empty Warning If you choose this option, you are responsible for installing and maintaining the CA certificates. They must be installed in /etc/grid-security/certificates , or a symlink must be made from that location to the directory that contains the CA certificates.","title":"Option 3: Site-managed CAs"},{"location":"common/ca/#installing-other-cas","text":"In addition to the above CAs, you can install other CAs via RPM. These only work with the RPMs that provide CAs (that is, osg-ca-certs and the like, but not osg-ca-scripts .) They are in addition to the above RPMs, so do not only install these extra CAs. 
Set of CAs RPM name Installation command (as root) cilogon-openid cilogon-openid-ca-cert yum install cilogon-openid-ca-cert","title":"Installing other CAs"},{"location":"common/ca/#verifying-ca-certificates","text":"After installing or updating the CA certificates, they can be verified with the following command: root@host # curl --cacert \\ --capath \\ -o /dev/null \\ https://gracc.opensciencegrid.org \\ && echo \"CA certificate installation verified\" Where is the path to a valid X.509 CA certificate and is the path to the directory containing the installed CA certificates. For example, the following command can be used to verify a default OSG CA certificate installation: root@host # curl --cacert /etc/grid-security/certificates/cilogon-osg.pem \\ --capath /etc/grid-security/certificates/ \\ -o /dev/null \\ https://gracc.opensciencegrid.org \\ && echo \"CA certificate installation verified\" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 22005 0 22005 0 0 86633 0 --:--:-- --:--:-- --:--:-- 499k CA certificate installation verified If you do not see CA certificate installation verified this means that your CA certificate installation is broken. First, ensure that your CA installation is up-to-date and if you continue to see issues please contact us .","title":"Verifying CA Certificates"},{"location":"common/ca/#keeping-ca-certificates-up-to-date","text":"It is important to keep CA certificates up-to-date for services and their clients to maintain integrity of production services. To verify that your CA certificates are on the latest version on a given host, determine the most recently released versions and the method by which your CA certificates have been installed: Retrieve the versions of the most recently released IGTF CA certificates and OSG CA certificates Determine which of the three CA certificate installation methods you are using: # rpm -q igtf-ca-certs osg-ca-certs osg-ca-scripts empty-ca-certs Based on which package is installed from the output in the previous step, choose one of the following options: If igtf-ca-certs or osg-ca-certs is installed , compare the installed version from step 2 to the corresponding version from step 1. If the version is older than the corresponding version from step 1, continue onto option 1 to upgrade your current installation and keep your installation up-to-date. If the versions match, your CA certificates are up-to-date! If osg-ca-scripts is installed , run the following command to update your CA certificates: # osg-ca-manage refreshCA And continue to the instructions in option 2 to enable automatic updates of your CA certificates. If empty-ca-scripts is installed , then you are responsible for maintaining your own CA certificates as outlined in option 3 . If none of the packages are installed , your host likely does not need CA certificates and you are done.","title":"Keeping CA Certificates Up-to-date"},{"location":"common/ca/#managing-certificate-revocation-lists","text":"In addition to CA certificates, you must have updated Certificate Revocation Lists (CRLs). CRLs contain certificate blacklists that OSG software uses to ensure that your hosts are only talking to valid clients or servers. To maintain up to date CAs, you will need to run the fetch-crl services. Note Normally fetch-crl is installed when you install the rest of the software and you do not need to explicitly install it. 
If you do wish to install it manually, run the following command: root@host # yum install fetch-crl If you do not wish to change the frequency of fetch-crl updates (default: every 6 hours) or use syslog for fetch-crl output, skip to the service management section","title":"Managing Certificate Revocation Lists"},{"location":"common/ca/#optional-configuring-fetch-crl","text":"The following sub-sections contain optional configuration instructions. Note Note that the nosymlinks option in the configuration files refers to ignoring links within the certificates directory (e.g. two different names for the same file). It is perfectly fine if the path of the CA certificates directory itself ( infodir ) is a link to a directory.","title":"Optional: configuring fetch-crl"},{"location":"common/ca/#changing-the-frequency-of-fetch-crl-cron","text":"To modify the times that fetch-crl-cron runs, edit /etc/cron.d/fetch-crl .","title":"Changing the frequency of fetch-crl-cron"},{"location":"common/ca/#logging-with-syslog","text":"fetch-crl can produce quite a bit of output when run in verbose mode. To send fetch-crl output to syslog, use the following instructions: Change the configuration file to enable syslog: logmode = syslog syslogfacility = daemon Make sure the file /var/log/daemon exists, e.g. touching the file Change /etc/logrotate.d files to rotate it","title":"Logging with syslog"},{"location":"common/ca/#managing-fetch-crl-services","text":"fetch-crl is installed as two different system services. The fetch-crl-boot service runs fetch-crl and is intended to only be enabled or disabled. The fetch-crl-cron service runs fetch-crl every 6 hours (with a random sleep time included). Both services are disabled by default. At the very minimum, the fetch-crl-cron service needs to be enabled and started, otherwise services will begin to fail as existing CRLs expire. Software Service name Notes Fetch CRL fetch-crl.timer (EL8-only) Runs fetch-crl every 6 hours and on boot fetch-crl-cron (EL7-only) Runs fetch-crl every 6 hours fetch-crl-boot (EL7-only) Runs fetch-crl immediately and on boot Start the services in the order listed and stop them in reverse order. As a reminder, here are common service commands (all run as root ): To... Run the command... Start a service systemctl start Stop a service systemctl stop Enable a service to start on boot systemctl enable Disable a service from starting on boot systemctl disable ","title":"Managing fetch-crl services"},{"location":"common/ca/#getting-help","text":"To get assistance, please use the this page .","title":"Getting Help"},{"location":"common/ca/#references","text":"Some guides on X.509 certificates: Useful commands: http://security.ncsa.illinois.edu/research/grid-howtos/usefulopenssl.html Install GSI authentication on a server: http://security.ncsa.illinois.edu/research/wssec/gsihttps/ Certificates how-to: http://www.nordugrid.org/documents/certificate_howto.html See this page for examples of verifying certificates. 
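To make the syslog instructions above concrete, the two settings quoted there live in the fetch-crl configuration file. A minimal sketch of the relevant stanza, assuming the default /etc/fetch-crl.conf location listed in the configuration-files table:

    logmode = syslog
    syslogfacility = daemon

The next scheduled fetch-crl run will pick up the new settings.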
Related software: osg-ca-manage osg-ca-certs-updater","title":"References"},{"location":"common/ca/#configuration-files","text":"Package File Description Location Comment All CA Packages CA File Location /etc/grid-security/certificates All CA Packages Index files /etc/grid-security/certificates/INDEX.html or /etc/grid-security/certificates/INDEX.txt Latest version also available at http://repo.opensciencegrid.org/cadist/ All CA Packages Change Log /etc/grid-security/certificates/CHANGES Latest version also available at http://repo.opensciencegrid.org/cadist/CHANGES osg-ca-certs or igtf-ca-certs contain only CA files osg-ca-scripts Configuration File for osg-update-certs /etc/osg/osg-update-certs.conf This file may be edited by hand, though it is recommended to use osg-ca-manage to set configuration parameters. fetch-crl-3.x Configuration file /etc/fetch-crl.conf The index and change log files contain a summary of all the CA distributed and their version.","title":"Configuration files"},{"location":"common/ca/#logs-files","text":"Package File Description Location osg-ca-scripts Log file of osg-update-certs /var/log/osg-update-certs.log osg-ca-scripts Stdout of osg-update-certs /var/log/osg-ca-certs-status.system.out osg-ca-scripts Stdout of osg-ca-manage /var/log/osg-ca-manage.system.out osg-ca-scripts Stdout of initial CA setup /var/log/osg-setup-ca-certificates.system.out","title":"Logs files"},{"location":"common/contact-registration/","text":"Registering Contact Information \u00b6 OSG staff keep track of contact information for OSG Consortium participants to provide access to OSG services, notify administrators and security contacts of software and security updates, and coordinate in case of security incidents or troubleshooting services. The OSG contact management service is backed by InCommon federation , meaning that contacts may register with the OSG using their institutional identities with familiar Single Sign-On forms. Privacy Notice The OSG treats any email addresses and phone numbers as confidential data but does not make any guarantees of privacy. All other data is public (such as name, GitHub username, and any association with particular services or collaborations). How do I register a mailing list? If you would like to register a mailing list as a contact for your site, please contact us directly . Submitting an Application \u00b6 To register with the OSG, submit an application using the self-signup process: Visit https://osg-htc.org/register You will be presented with a Single-Sign On page. Select your insitution and sign in with your insitutional credentials: Help, my institution does not show up in the drop-down! If your institution does not show up in the drop-down menu, then your institution is not part of the InCommon federation . In this case, we recommend using an ORCID account instead, registering a new one if necessary. After you have signed in, you will be presented with the self-signup form. Click the \"BEGIN\" button: Enter your name, email address, GitHub username (optional), and a comment describing why you are registering as a participant in the OSG Consortium. Your institution may provide defaults for your name and email address but you may override these values. Once you have updated all the fields to your liking, click the \"SUBMIT\" button: Verifying Your Email Address \u00b6 After submitting your registration application, you will receive an email from registry@cilogon.org to verify your email address. 
Follow the link in the email and click the \"Accept\" button to complete the verification: Wait for URL redirection After clicking the email verification link, be sure to let the page to completely load (you will be redirected back to this page), otherwise you may have issues completing your registration. If you believe this has happened to you, please contact us for assistance. Help, my email verification link has expired! If the email verification link has expired, please contact us to request a new verification link. Waiting for Approval \u00b6 After verifying your email address, your registration application must be approved by OSG staff. Once your registration application has been approved, you will receive a confirmation email: Once you have received your confirmation email, you may start using OSG services such as registering your resources . OASIS Managers: Adding an SSH Key \u00b6 After approval by OSG staff, OASIS managers must upload a public SSH key before being able to access the OASIS login host: Visit https://osg-htc.org/register and login if prompted Click your name in the top right to get a dropdown and click the My Profile (OSG) button On the right-side of your profile, click the Authenticators link: On the authenticators page, click the Manage button: On the SSH keys page, click the Add SSH Key link: Finally, upload your public SSH key from your computer: Getting Help \u00b6 For assistance with the OSG contact registration process, please use this page .","title":"Contact Information"},{"location":"common/contact-registration/#registering-contact-information","text":"OSG staff keep track of contact information for OSG Consortium participants to provide access to OSG services, notify administrators and security contacts of software and security updates, and coordinate in case of security incidents or troubleshooting services. The OSG contact management service is backed by InCommon federation , meaning that contacts may register with the OSG using their institutional identities with familiar Single Sign-On forms. Privacy Notice The OSG treats any email addresses and phone numbers as confidential data but does not make any guarantees of privacy. All other data is public (such as name, GitHub username, and any association with particular services or collaborations). How do I register a mailing list? If you would like to register a mailing list as a contact for your site, please contact us directly .","title":"Registering Contact Information"},{"location":"common/contact-registration/#submitting-an-application","text":"To register with the OSG, submit an application using the self-signup process: Visit https://osg-htc.org/register You will be presented with a Single-Sign On page. Select your insitution and sign in with your insitutional credentials: Help, my institution does not show up in the drop-down! If your institution does not show up in the drop-down menu, then your institution is not part of the InCommon federation . In this case, we recommend using an ORCID account instead, registering a new one if necessary. After you have signed in, you will be presented with the self-signup form. Click the \"BEGIN\" button: Enter your name, email address, GitHub username (optional), and a comment describing why you are registering as a participant in the OSG Consortium. Your institution may provide defaults for your name and email address but you may override these values. 
Once you have updated all the fields to your liking, click the \"SUBMIT\" button:","title":"Submitting an Application"},{"location":"common/contact-registration/#verifying-your-email-address","text":"After submitting your registration application, you will receive an email from registry@cilogon.org to verify your email address. Follow the link in the email and click the \"Accept\" button to complete the verification: Wait for URL redirection After clicking the email verification link, be sure to let the page to completely load (you will be redirected back to this page), otherwise you may have issues completing your registration. If you believe this has happened to you, please contact us for assistance. Help, my email verification link has expired! If the email verification link has expired, please contact us to request a new verification link.","title":"Verifying Your Email Address"},{"location":"common/contact-registration/#waiting-for-approval","text":"After verifying your email address, your registration application must be approved by OSG staff. Once your registration application has been approved, you will receive a confirmation email: Once you have received your confirmation email, you may start using OSG services such as registering your resources .","title":"Waiting for Approval"},{"location":"common/contact-registration/#oasis-managers-adding-an-ssh-key","text":"After approval by OSG staff, OASIS managers must upload a public SSH key before being able to access the OASIS login host: Visit https://osg-htc.org/register and login if prompted Click your name in the top right to get a dropdown and click the My Profile (OSG) button On the right-side of your profile, click the Authenticators link: On the authenticators page, click the Manage button: On the SSH keys page, click the Add SSH Key link: Finally, upload your public SSH key from your computer:","title":"OASIS Managers: Adding an SSH Key"},{"location":"common/contact-registration/#getting-help","text":"For assistance with the OSG contact registration process, please use this page .","title":"Getting Help"},{"location":"common/help/","text":"How to Get Help \u00b6 This page is aimed at OSG site administrators looking for support. Help for OSG users can be found at our support desk . Security Incidents \u00b6 Security incidents can be reported by following the instructions on the Incident Discovery and Reporting page. Software or Service Support \u00b6 If you are experiencing issues with OSG software or services, please consult the following resources before opening a support inquiry: Troubleshooting sections or pages for the problematic software Recent OSG Software release notes OSG 23 OSG 3.6 Outage information for OSG services Submitting support inquiries \u00b6 If your problem still hasn't been resolved by consulting the resources above, please submit a support inquiry with the information noted below: If you came to this page from an installation guide, please provide the following information: Commands and output from any Troubleshooting sections or pages The OSG system profile ( osg-profile.txt ), generated by running the following command: root@host # osg-system-profiler Submit a support inquiry to the system based on the VOs that you are associated with: If you are primarily associated with... Submit new tickets to... LHC VOs GGUS Anyone else help@osg-htc.org Community-specific support \u00b6 Some OSG VOs have dedicated forums or mechanisms for community-specific support. 
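(For example, a minimal sketch of collecting the information requested above before opening a ticket; this assumes osg-system-profiler is installed from the OSG repositories and writes osg-profile.txt to the current directory, as described above:)
root@host # osg-system-profiler
root@host # ls -lh osg-profile.txt
Attach the resulting osg-profile.txt to your GGUS ticket or your email to help@osg-htc.org.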
If your VO provides user support, that should be a user's first line of support because the VO is most familiar with your applications and requirements. The list of support centers for OSG VOs can be found here . Resources for CMS sites: http://www.uscms.org/uscms_at_work/physics/computing/grid/index.shtml CMS Hyper News: https://hypernews.cern.ch/HyperNews/CMS/get/osg-tier3.html CMS Twiki: https://twiki.cern.ch/twiki/bin/viewauth/CMS/USTier3Computing","title":"Help / Security Incidents"},{"location":"common/help/#how-to-get-help","text":"This page is aimed at OSG site administrators looking for support. Help for OSG users can be found at our support desk .","title":"How to Get Help"},{"location":"common/help/#security-incidents","text":"Security incidents can be reported by following the instructions on the Incident Discovery and Reporting page.","title":"Security Incidents"},{"location":"common/help/#software-or-service-support","text":"If you are experiencing issues with OSG software or services, please consult the following resources before opening a support inquiry: Troubleshooting sections or pages for the problematic software Recent OSG Software release notes OSG 23 OSG 3.6 Outage information for OSG services","title":"Software or Service Support"},{"location":"common/help/#submitting-support-inquiries","text":"If your problem still hasn't been resolved by consulting the resources above, please submit a support inquiry with the information noted below: If you came to this page from an installation guide, please provide the following information: Commands and output from any Troubleshooting sections or pages The OSG system profile ( osg-profile.txt ), generated by running the following command: root@host # osg-system-profiler Submit a support inquiry to the system based on the VOs that you are associated with: If you are primarily associated with... Submit new tickets to... LHC VOs GGUS Anyone else help@osg-htc.org","title":"Submitting support inquiries"},{"location":"common/help/#community-specific-support","text":"Some OSG VOs have dedicated forums or mechanisms for community-specific support. If your VO provides user support, that should be a user's first line of support because the VO is most familiar with your applications and requirements. The list of support centers for OSG VOs can be found here . Resources for CMS sites: http://www.uscms.org/uscms_at_work/physics/computing/grid/index.shtml CMS Hyper News: https://hypernews.cern.ch/HyperNews/CMS/get/osg-tier3.html CMS Twiki: https://twiki.cern.ch/twiki/bin/viewauth/CMS/USTier3Computing","title":"Community-specific support"},{"location":"common/registration/","text":"Registering with the OSG Consortium \u00b6 OSG staff keeps a registry containing active projects, collaborations (a.k.a. virtual organizations or VOs), resources, and resource downtimes stored as YAML files in the topology GitHub repository . This registry is used for accounting data , contact information, and resource availability. Use this page to learn how to register information in the OSG Consortium.
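(If you prefer to browse the registry locally rather than through the GitHub web interface, one option is to clone it; a sketch assuming git is available, with the subdirectories shown being the ones referenced later on this page:)
user@host $ git clone https://github.com/opensciencegrid/topology.git
user@host $ ls topology
projects  topology  virtual-organizations  (listing abridged)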
Registration Requirements \u00b6 The instructions in this document require the following: A GitHub account A working knowledge of GitHub collaboration OSG contact registration Registering Contacts \u00b6 OSG staff keep track of contact information for OSG Consortium participants to provide access to OSG services, notify administrators and security contacts of software and security updates, and coordinating in case of security incidents or troubleshooting services. To register your contact information with the OSG Consortium, follow the instructions in this document . Privacy Notice The OSG treats any email addresses and phone numbers as confidential data but does not make any guarantees of privacy. All other data is public (such as name, GitHub username, and any association with particular services or collaborations). Registering Resources \u00b6 An OSG resource is a host that provides services to OSG campuses and collaborations; some examples are Compute Entrypoints, storage endpoints, or perfSONAR hosts. See the full list of services that should be registered in the OSG topology here . OSG resources are stored under a hierarchy of facilities, sites, and resource groups, defined as follows: Facility : The institution or company name where your resource is located. Site : Smaller than a facility; typically represents a computing center or an academic department. Frequently used as the display name for accounting dashboards . Resource Group : A logical grouping of resources at a site, i.e. all resources associated with a specific computing cluster. Multi-resource downtimes are easiest to declare across a resource group. Production and testing resources must be placed into separate resource groups. Resource : A host that provides services, e.g. Compute Entrypoints, storage endpoints, or perfSONAR hosts. Throughout this document, you will be asked to substitute your own facility, site, resource group, and resource names when registering with the OSG. If you don't already know the relevant names for your resource, using the following naming conventions: Level Naming convention Facility Unabbreviated institution or company name, e.g. University of Wisconsin - Madison Site Computing center or academic department, e.g. CHTC , MWT2 ATLAS UC , San Diego Supercomputer Center The only characters allowed in Site names are letters, numbers, underscores, hyphens, and spaces; i.e., a Site name must match the regular expression ^[A-Za-z0-9_ -]+$ Resource Group Abbreviated facility, site, and cluster name. Resource groups used for testing purposes should have an -ITB or - ITB suffix, e.g. TCNJ-ELSA-ITB Resource In all capital letters, -- , for example: TCNJ-ELSA-CE or NMSU-AGGIE-GRID-SQUID If you don't know which VO to use, pick OSG . OSG resources are stored in the GitHub repository as YAML files under a directory structure that reflects the above hierarchy, i.e. topology///.yaml from the root of the topology repository . New site \u00b6 To register a site, first choose a name for it (see the naming conventions table above ) The site name will appear in OSG accounting in places such as the GRACC site dashboard . Once you have chosen a site name, open the following in your browser: https://github.com/opensciencegrid/topology/new/master?filename=topology///SITE.yaml (replacing and with the facility and the site name that you chose ). 
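(As an illustration of this facility/site/resource group hierarchy, here is roughly what a facility directory looks like in a local clone, using the University of Wisconsin / CHTC names that appear elsewhere on this page; the listing is abridged and illustrative only, and your own facility, site, and resource group names will differ:)
user@host $ ls 'topology/University of Wisconsin/CHTC/'
CHTC.yaml  CHTC-Slurm-HPC.yaml  CHTC-Slurm-HPC_downtime.yaml  (listing abridged)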
\"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Make changes with the GitHub file editor using the site template as a guide. You may leave the ID field blank. When adding new entries, make sure that the formatting and indentation of your entry matches that of the template. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. Provide a descriptive commit message, for example: Adding AggieGrid cluster for New Mexico State Searching for resources \u00b6 Whether you are registering a new resource or modifying an existing resource, start by searching for the FQDN of your host to avoid any duplicate registrations: Open the topology repository in your browser. Search the repository for the FQDN of your resource wrapped in double-quotes using the GitHub search bar (e.g., \"glidein2.chtc.wisc.edu\" ): If the search doesn't return any results , skip to these instructions for registering a new resource. If the search returns a single YAML file , open the link to the YAML file and skip to these instructions for modifying existing resources. If the search returns more than one YAML file , please contact us . Note If you are adding a new service to a host which is already registered as a resource, follow the instructions for modifying existing resources. New resources \u00b6 Before registering a new resource, make sure that its FQDN is not already registered . To register a new resource, follow the instructions below: Find the facility, site, and resource group for your resource in the topology repository under this directory structure: topology///.yaml . When searching for these, keep in mind that case and spaces matter. If you do not have a facility, contact help@osg-htc.org for help. If you have a facility but not a site, first follow the instructions for registering a site above. If you have a facility and a site but not a resource group, pick a resource group name . Once you have your facility, site, and resource group, follow the instructions below, replacing instances of , , and with the corresponding names that you chose above : If your resource group already exists under your facility and site, open the following URL in your browser: https://github.com/opensciencegrid/topology/edit/master/topology///.yaml For example, to add a resource to the CHTC resource group for the CHTC site at the University of Wisconsin , open the following URL: https://github.com/opensciencegrid/topology/edit/master/topology/University of Wisconsin/CHTC/CHTC.yaml If your resource group does not exist, open the following URL in your browser: https://github.com/opensciencegrid/topology/new/master?filename=topology///.yaml For example, to create a CHTC-Slurm-HPC resource group for the Center for High Throughput Computing ( CHTC ) at the University of Wisconsin , open the following URL: https://github.com/opensciencegrid/topology/new/master?filename=topology/University of Wisconsin/CHTC/CHTC-Slurm-HPC.yaml \"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Make changes with the GitHub file editor using the resource group template as a guide. 
You may leave any ID or GroupID fields blank. When adding new entries, make sure that the formatting and indentation of your entry matches that of the template. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. Provide a descriptive commit message, for example: Adding a new compute entrypoint to the CHTC Modifying existing resources \u00b6 To modify an existing resource, follow these instructions: Find the resource that you would like to modify by searching GitHub , and open the link to the YAML file. Click the branch selector button next to the file path and select the master branch. Make changes with the GitHub file editor using the resource group template as a guide. You may leave any ID or GroupID fields blank. Make sure that the formatting and indentation of the modified entry does not change. If you are adding a new service to a host that is already registered as a resource, add the new service to the existing resource; do not create a new resource for the same host. !!! note \"\"You're editing a file in a project you don't have write access to.\"\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. Provide a descriptive commit message, for example: Updating administrative contact information for CHTC-glidein2 Retiring resources \u00b6 To retire an already registered resource, set Active: false . For example: ... Production: true Resources: GLOW: Active: false ... Services: CE: Description: Compute Entrypoint Details: hidden: false If the Active attribute does not already exist within the resource definition, add it. If your resource becomes available again, set Active: true . Registering Resource Downtimes \u00b6 Resource downtime is a finite period of time for which one or more of the services of a registered resource are unavailable. Warning If you expect your resource to be indefinitely unavailable, retire the resource instead of registering a downtime. Downtimes are stored in YAML files alongside the resource group YAML files as described here . For example, downtimes for resources in the CHTC-Slurm-HPC resource group of the CHTC site at the University of Wisconsin can be found and registered in the following file, relative to the root of the topology repository : topology/University of Wisconsin/CHTC/CHTC-Slurm-HPC_downtime.yaml Note Do not put downtime updates in the same pull request as other topology updates. Registering new downtime \u00b6 To register a new downtime for a resource or for multiples resources that are part of a resource group, you will use webforms to generate the contents of the downtime entry, copy it into the downtime file corresponding to your resource, and submit it as a GitHub pull request. Follow the instructions below: Open one of the downtime generation webforms in your browser: Use the resource downtime generator if you only need to declare a downtime for a single resource. Use the resource group downtime generator if you need to declare a downtime for multiple resources across a resource group. Select your facility, site, resource group, and/or resource from the corresponding lists. For the single resource downtime form: Select all the services that will be down. To select multiple, use Control-Click on Windows and Linux, or Command-Click on macOS. 
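(For reference, the downtime file for a resource group lives next to the resource group file it describes; continuing the CHTC example used elsewhere on this page, an illustrative listing from a local clone might look like:)
user@host $ ls 'topology/University of Wisconsin/CHTC/' | grep downtime
CHTC-Slurm-HPC_downtime.yaml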
Fill the other fields with information about the downtime. Click the Generate button. If the information is valid, a block of text will be displayed in the box labeled Generated YAML . Otherwise, check for error messages and fix your input. Follow the instructions shown below the generated block of text. \"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Wait for OSG staff to approve and merge your new downtime. Modifying existing downtime \u00b6 In case an already registered downtime is incorrect or need to be updated to reflect new information, you can modify existing downtime entries using the GitHub editor. Failure Changes to the ID or CreatedTime fields will be rejected. To modify an existing downtime entry for a registered resource, manually make the changes in the matching downtime YAML file. Follow the instructions below: Open the topology repository in your browser. If you do not know the facility, site, and resource group of the resource the downtime entry refers to, search the repository for the FQDN of your resource wrapped in double-quotes using the GitHub search bar (e.g., \"glidein2.chtc.wisc.edu\" ): If the search returns a single YAML file , note the name of the facility, site, and resource group and continue to the next step. If the search doesn't return any results or returns more than one YAML file , please contact us . Open the following URL in your browser using the facility, site, and resource group names to replace , , and , respectively: https://github.com/opensciencegrid/topology/edit/master/topology///_downtime.yaml \"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Make changes with the GitHub file editor using the downtime template as a reference. Make sure that the formatting and indentation of the modified entry does not change. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. Provide a descriptive commit message, for example: Move forward end date for CHTC-glidein2 regular maintenance Wait for OSG staff to approve and merge your modified downtime. Registering Virtual Organizations \u00b6 Virtual Organizations (VOs) are sets of groups or individuals defined by some common cyber-infrastructure need. This can be a scientific experiment, a university campus or a distributed research effort. A VO represents all its members and their common needs in distributed computing environment. A VO also includes the group\u2019s computing/storage resources and services. For more information about VOs, see this page . Info Before submitting a registration for a new VO, please contact us describing your organization's computing needs. VO information is stored as YAML files in the virtual-organizations directory of the topology repository . To modify a VO's information or register a new VO, follow the instructions below: Open the topology repository in your browser. If you see your VO in the list, open the file and continue to the next step. If you do not see your VO in the list, click Create new file button: In the new file dialog, enter .yaml , replacing with the name of your VO. 
\"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Make changes with the GitHub file editor using the VO template as a guide. You may leave any ID fields blank. If you are modifying existing entries, make sure you do not change formatting or indentation of the modified entry. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. Provide a descriptive commit message, for example: Updating contact information for the GLOW VO Registering Projects \u00b6 Info Before submitting a registration for a new project, please contact us describing your organization's computing needs. Project information is stored as YAML files in the projects directory of the topology repository . To modify a VO's information or register a new VO, follow the instructions below: Open the topology repository in your browser. If you see your project in the list, open the file and continue to the next step. If you do not see your project in the list, click Create new file button: In the new file dialog, enter .yaml , replacing with the name of your project. \"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Make changes with the GitHub file editor using the project template as a guide. You may leave any ID fields blank. If you are modifying existing entries, make sure you do not change formatting or indentation of the modified entry. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. Provide a descriptive commit message, for example: Updating contact information for the Mu2e project Getting Help \u00b6 To get assistance, please use the this page .","title":"Resources and Collaborations"},{"location":"common/registration/#registering-with-the-osg-consortium","text":"OSG staff keeps a registry containing active projects, collaborations (a.k.a. virtual organizations or VOs), resources, and resource downtimes stored as YAML files in the topology GitHub repository . This registry is used for accounting data , contact information, and resource availability. Use this page to learn how to register information in the OSG Consortium.","title":"Registering with the OSG Consortium"},{"location":"common/registration/#registration-requirements","text":"The instructions in this document require the following: A GitHub account A working knowledge of GitHub collaboration OSG contact registration","title":"Registration Requirements"},{"location":"common/registration/#registering-contacts","text":"OSG staff keep track of contact information for OSG Consortium participants to provide access to OSG services, notify administrators and security contacts of software and security updates, and coordinating in case of security incidents or troubleshooting services. To register your contact information with the OSG Consortium, follow the instructions in this document . Privacy Notice The OSG treats any email addresses and phone numbers as confidential data but does not make any guarantees of privacy. 
All other data is public (such as name, GitHub username, and any association with particular services or collaborations).","title":"Registering Contacts"},{"location":"common/registration/#registering-resources","text":"An OSG resource is a host that provides services to OSG campuses and collaborations; some examples are Compute Entrypoints, storage endpoints, or perfSONAR hosts. See the full list of services that should be registered in the OSG topology here . OSG resources are stored under a hierarchy of facilities, sites, and resource groups, defined as follows: Facility : The institution or company name where your resource is located. Site : Smaller than a facility; typically represents a computing center or an academic department. Frequently used as the display name for accounting dashboards . Resource Group : A logical grouping of resources at a site, i.e. all resources associated with a specific computing cluster. Multi-resource downtimes are easiest to declare across a resource group. Production and testing resources must be placed into separate resource groups. Resource : A host that provides services, e.g. Compute Entrypoints, storage endpoints, or perfSONAR hosts. Throughout this document, you will be asked to substitute your own facility, site, resource group, and resource names when registering with the OSG. If you don't already know the relevant names for your resource, using the following naming conventions: Level Naming convention Facility Unabbreviated institution or company name, e.g. University of Wisconsin - Madison Site Computing center or academic department, e.g. CHTC , MWT2 ATLAS UC , San Diego Supercomputer Center The only characters allowed in Site names are letters, numbers, underscores, hyphens, and spaces; i.e., a Site name must match the regular expression ^[A-Za-z0-9_ -]+$ Resource Group Abbreviated facility, site, and cluster name. Resource groups used for testing purposes should have an -ITB or - ITB suffix, e.g. TCNJ-ELSA-ITB Resource In all capital letters, -- , for example: TCNJ-ELSA-CE or NMSU-AGGIE-GRID-SQUID If you don't know which VO to use, pick OSG . OSG resources are stored in the GitHub repository as YAML files under a directory structure that reflects the above hierarchy, i.e. topology///.yaml from the root of the topology repository .","title":"Registering Resources"},{"location":"common/registration/#new-site","text":"To register a site, first choose a name for it (see the naming conventions table above ) The site name will appear in OSG accounting in places such as the GRACC site dashboard . Once you have chosen a site name, open the following in your browser: https://github.com/opensciencegrid/topology/new/master?filename=topology///SITE.yaml (replacing and with the facility and the site name that you chose ). \"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Make changes with the GitHub file editor using the site template as a guide. You may leave the ID field blank. When adding new entries, make sure that the formatting and indentation of your entry matches that of the template. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. 
Provide a descriptive commit message, for example: Adding AggieGrid cluster for New Mexico State","title":"New site"},{"location":"common/registration/#searching-for-resources","text":"Whether you are registering a new resource or modifying an existing resource, start by searching for the FQDN of your host to avoid any duplicate registrations: Open the topology repository in your browser. Search the repository for the FQDN of your resource wrapped in double-quotes using the GitHub search bar (e.g., \"glidein2.chtc.wisc.edu\" ): If the search doesn't return any results , skip to these instructions for registering a new resource. If the search returns a single YAML file , open the link to the YAML file and skip to these instructions for modifying existing resources. If the search returns more than one YAML file , please contact us . Note If you are adding a new service to a host which is already registered as a resource, follow the instructions for modifying existing resources.","title":"Searching for resources"},{"location":"common/registration/#new-resources","text":"Before registering a new resource, make sure that its FQDN is not already registered . To register a new resource, follow the instructions below: Find the facility, site, and resource group for your resource in the topology repository under this directory structure: topology///.yaml . When searching for these, keep in mind that case and spaces matter. If you do not have a facility, contact help@osg-htc.org for help. If you have a facility but not a site, first follow the instructions for registering a site above. If you have a facility and a site but not a resource group, pick a resource group name . Once you have your facility, site, and resource group, follow the instructions below, replacing instances of , , and with the corresponding names that you chose above : If your resource group already exists under your facility and site, open the following URL in your browser: https://github.com/opensciencegrid/topology/edit/master/topology///.yaml For example, to add a resource to the CHTC resource group for the CHTC site at the University of Wisconsin , open the following URL: https://github.com/opensciencegrid/topology/edit/master/topology/University of Wisconsin/CHTC/CHTC.yaml If your resource group does not exist, open the following URL in your browser: https://github.com/opensciencegrid/topology/new/master?filename=topology///.yaml For example, to create a CHTC-Slurm-HPC resource group for the Center for High Throughput Computing ( CHTC ) at the University of Wisconsin , open the following URL: https://github.com/opensciencegrid/topology/new/master?filename=topology/University of Wisconsin/CHTC/CHTC-Slurm-HPC.yaml \"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Make changes with the GitHub file editor using the resource group template as a guide. You may leave any ID or GroupID fields blank. When adding new entries, make sure that the formatting and indentation of your entry matches that of the template. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. 
Provide a descriptive commit message, for example: Adding a new compute entrypoint to the CHTC","title":"New resources"},{"location":"common/registration/#modifying-existing-resources","text":"To modify an existing resource, follow these instructions: Find the resource that you would like to modify by searching GitHub , and open the link to the YAML file. Click the branch selector button next to the file path and select the master branch. Make changes with the GitHub file editor using the resource group template as a guide. You may leave any ID or GroupID fields blank. Make sure that the formatting and indentation of the modified entry does not change. If you are adding a new service to a host that is already registered as a resource, add the new service to the existing resource; do not create a new resource for the same host. !!! note \"\"You're editing a file in a project you don't have write access to.\"\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. Provide a descriptive commit message, for example: Updating administrative contact information for CHTC-glidein2","title":"Modifying existing resources"},{"location":"common/registration/#retiring-resources","text":"To retire an already registered resource, set Active: false . For example: ... Production: true Resources: GLOW: Active: false ... Services: CE: Description: Compute Entrypoint Details: hidden: false If the Active attribute does not already exist within the resource definition, add it. If your resource becomes available again, set Active: true .","title":"Retiring resources"},{"location":"common/registration/#registering-resource-downtimes","text":"Resource downtime is a finite period of time for which one or more of the services of a registered resource are unavailable. Warning If you expect your resource to be indefinitely unavailable, retire the resource instead of registering a downtime. Downtimes are stored in YAML files alongside the resource group YAML files as described here . For example, downtimes for resources in the CHTC-Slurm-HPC resource group of the CHTC site at the University of Wisconsin can be found and registered in the following file, relative to the root of the topology repository : topology/University of Wisconsin/CHTC/CHTC-Slurm-HPC_downtime.yaml Note Do not put downtime updates in the same pull request as other topology updates.","title":"Registering Resource Downtimes"},{"location":"common/registration/#registering-new-downtime","text":"To register a new downtime for a resource or for multiples resources that are part of a resource group, you will use webforms to generate the contents of the downtime entry, copy it into the downtime file corresponding to your resource, and submit it as a GitHub pull request. Follow the instructions below: Open one of the downtime generation webforms in your browser: Use the resource downtime generator if you only need to declare a downtime for a single resource. Use the resource group downtime generator if you need to declare a downtime for multiple resources across a resource group. Select your facility, site, resource group, and/or resource from the corresponding lists. For the single resource downtime form: Select all the services that will be down. 
To select multiple, use Control-Click on Windows and Linux, or Command-Click on macOS. Fill the other fields with information about the downtime. Click the Generate button. If the information is valid, a block of text will be displayed in the box labeled Generated YAML . Otherwise, check for error messages and fix your input. Follow the instructions shown below the generated block of text. \"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Wait for OSG staff to approve and merge your new downtime.","title":"Registering new downtime"},{"location":"common/registration/#modifying-existing-downtime","text":"In case an already registered downtime is incorrect or need to be updated to reflect new information, you can modify existing downtime entries using the GitHub editor. Failure Changes to the ID or CreatedTime fields will be rejected. To modify an existing downtime entry for a registered resource, manually make the changes in the matching downtime YAML file. Follow the instructions below: Open the topology repository in your browser. If you do not know the facility, site, and resource group of the resource the downtime entry refers to, search the repository for the FQDN of your resource wrapped in double-quotes using the GitHub search bar (e.g., \"glidein2.chtc.wisc.edu\" ): If the search returns a single YAML file , note the name of the facility, site, and resource group and continue to the next step. If the search doesn't return any results or returns more than one YAML file , please contact us . Open the following URL in your browser using the facility, site, and resource group names to replace , , and , respectively: https://github.com/opensciencegrid/topology/edit/master/topology///_downtime.yaml \"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Make changes with the GitHub file editor using the downtime template as a reference. Make sure that the formatting and indentation of the modified entry does not change. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. Provide a descriptive commit message, for example: Move forward end date for CHTC-glidein2 regular maintenance Wait for OSG staff to approve and merge your modified downtime.","title":"Modifying existing downtime"},{"location":"common/registration/#registering-virtual-organizations","text":"Virtual Organizations (VOs) are sets of groups or individuals defined by some common cyber-infrastructure need. This can be a scientific experiment, a university campus or a distributed research effort. A VO represents all its members and their common needs in distributed computing environment. A VO also includes the group\u2019s computing/storage resources and services. For more information about VOs, see this page . Info Before submitting a registration for a new VO, please contact us describing your organization's computing needs. VO information is stored as YAML files in the virtual-organizations directory of the topology repository . 
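(If you keep a local clone of the topology repository, a quick way to check whether your VO already has a file is to list the virtual-organizations directory; shown here with the GLOW VO mentioned elsewhere on this page, output illustrative:)
user@host $ ls virtual-organizations/ | grep -i glow
GLOW.yaml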
To modify a VO's information or register a new VO, follow the instructions below: Open the topology repository in your browser. If you see your VO in the list, open the file and continue to the next step. If you do not see your VO in the list, click the Create new file button: In the new file dialog, enter .yaml , replacing with the name of your VO. \"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Make changes with the GitHub file editor using the VO template as a guide. You may leave any ID fields blank. If you are modifying existing entries, make sure you do not change formatting or indentation of the modified entry. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. Provide a descriptive commit message, for example: Updating contact information for the GLOW VO","title":"Registering Virtual Organizations"},{"location":"common/registration/#registering-projects","text":"Info Before submitting a registration for a new project, please contact us describing your organization's computing needs. Project information is stored as YAML files in the projects directory of the topology repository . To modify a project's information or register a new project, follow the instructions below: Open the topology repository in your browser. If you see your project in the list, open the file and continue to the next step. If you do not see your project in the list, click the Create new file button: In the new file dialog, enter .yaml , replacing with the name of your project. \"You're editing a file in a project you don't have write access to.\" If you see this message in the GitHub file editor, this is normal and it is because you do not have direct write access to the OSG copy of the topology data, which is why you are creating a pull request. Make changes with the GitHub file editor using the project template as a guide. You may leave any ID fields blank. If you are modifying existing entries, make sure you do not change formatting or indentation of the modified entry. Submit your changes as a pull request; select \"opensciencegrid/topology\" as the base repo. Provide a descriptive commit message, for example: Updating contact information for the Mu2e project","title":"Registering Projects"},{"location":"common/registration/#getting-help","text":"To get assistance, please use this page .","title":"Getting Help"},{"location":"common/yum/","text":"OSG Yum Repositories \u00b6 This document introduces Yum repositories and how they are used in the OSG. If you are unfamiliar with Yum, see the documentation on using Yum and RPM . Repositories \u00b6 The OSG hosts multiple repositories at repo.opensciencegrid.org that are intended for public use: The OSG Yum repositories... Contain RPMs that... osg , osg-upcoming are considered production-ready (default). osg-testing , osg-upcoming-testing have passed developer or integration testing but not acceptance testing osg-development , osg-upcoming-development have not passed developer, integration or acceptance testing. Do not use without instruction from the OSG Software and Release Team. osg-contrib have been contributed from outside of the OSG Software and Release Team. See this section for details. Note The upcoming repositories contain newer software that might require manual action after an update.
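(To see which of the repositories in the table above are currently enabled on a given host, a simple check is the following; yum repolist is standard, though the exact repository IDs shown will depend on the osg-release package you have installed:)
root@host # yum repolist enabled | grep -i osg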
They are not enabled by default and must be enabled in addition to the main osg repository. See the upcoming software section for details. OSG's RPM packages also rely on external packages provided by supported OSes and EPEL. You must have the following repositories available and enabled: OS repositories, including the following ones that aren't enabled by default: extras (SL 7, CentOS 7, CentOS Stream 8, Rocky Linux 8, AlmaLinux 8) Server-Extras (RHEL 7) powertools (CentOS Stream 8, Rocky Linux 8, AlmaLinux 8) CodeReady Builder (RHEL 8) or crb (all EL9 variants) EPEL repositories OSG repositories If any of these repositories are missing, you may end up with installation issues or missing dependencies. Danger Other repositories, such as jpackage , dag , or rpmforge , are not supported and you may encounter problems if you use them. Upcoming Software \u00b6 Certain sites have requested new versions of software that would be considered \"disruptive\" or \"experimental\": upgrading to them would likely require manual intervention after their installation. We do not want sites to unwittingly upgrade to these versions. We have placed such software in separate repositories. Their names start with osg-upcoming and have the same structure as our standard repositories, as well as the same guarantees of quality and production-readiness. There are separate sets of upcoming repositories for each release series. For example, the OSG 23 repos have corresponding 23-upcoming repos . The upcoming repositories are meant to be layered on top of our standard repositories: installing software from the upcoming repositories requires also enabling the standard repositories from the same release. Contrib Software \u00b6 In addition to our regular software repositories, we also have a contrib (short for \"contributed\") software repository. This is software that is does not go through the same software testing and release processes as the official OSG Software release, but may be useful to you. Particularly, contrib software is not guaranteed to be compatible with the rest of the OSG Software stack nor is it supported by the OSG. The definitive list of software in the contrib repository can be found here: OSG 23 EL8 contrib software repository OSG 23 EL9 contrib software repository OSG 3.6 EL7 contrib software repository OSG 3.6 EL8 contrib software repository OSG 3.6 EL9 contrib software repository If you would like to distribute your software in the OSG contrib repository, please contact us with a description of your software, what users it serves, and relevant RPM packaging. Installing Yum Repositories \u00b6 Install the Yum priorities plugin (EL7) \u00b6 The Yum priorities plugin is used to tell Yum to prefer OSG packages over EPEL or OS packages. It is important to install and enable the Yum priorities plugin before installing OSG Software to ensure that you are getting the OSG-supported versions. This plugin is built into Yum on EL8 and EL9 distributions. Install the Yum priorities package: root@host # yum install yum-plugin-priorities Ensure that /etc/yum.conf has the following line in the [main] section: plugins=1 Enable additional OS repositories \u00b6 Some packages depend on packages that are in OS repositories not enabled by default. The repositories to enable, as well as the instructions to enable them, are OS-dependent. Note A repository is enabled if it has enabled=1 in its definition, or if the enabled line is missing (i.e. it is enabled unless specified otherwise.) 
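(On EL8 and EL9 variants you can also enable these OS repositories from the command line instead of editing the repo files by hand; a sketch that assumes dnf-plugins-core, which provides dnf config-manager, is installed:)
root@host # dnf config-manager --set-enabled powertools    (EL8 variants)
root@host # dnf config-manager --set-enabled crb           (EL9 variants)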
SL 7 \u00b6 Install the yum-conf-extras RPM package. Ensure that the sl-extras repo in /etc/yum.repos.d/sl-extras.repo is enabled. CentOS 7 \u00b6 Ensure that the extras repo in /etc/yum.repos.d/CentOS-Base.repo is enabled. CentOS Stream 8 \u00b6 Ensure that the extras repo in /etc/yum.repos.d/CentOS-Stream-Extras.repo is enabled. Ensure that the powertools repo in /etc/yum.repos.d/CentOS-Stream-PowerTools.repo is enabled. Rocky Linux 8 \u00b6 Ensure that the extras repo in /etc/yum.repos.d/Rocky-Extras.repo is enabled. Ensure that the powertools repo in /etc/yum.repos.d/Rocky-PowerTools.repo is enabled. AlmaLinux 8 \u00b6 Ensure that the extras repo in /etc/yum.repos.d/almalinux.repo is enabled. Ensure that the powertools repo in /etc/yum.repos.d/almalinux-powertools.repo is enabled. RHEL 7 \u00b6 Ensure that the Server-Extras channel is enabled. RHEL 8 \u00b6 Ensure that the CodeReady Linux Builder channel is enabled. See Red Hat's instructions on how to enable this repo. Rocky Linux 9 \u00b6 Ensure that the crb repo in /etc/yum.repos.d/rocky.repo is enabled AlmaLinux 9 \u00b6 Ensure that the crb repo in /etc/yum.repos.d/almalinux-crb.repo is enabled CentOS Stream 9 \u00b6 Ensure that the crb repo in /etc/yum.repos.d/centos.repo is enabled Install the EPEL repositories \u00b6 OSG software depends on packages distributed via the EPEL repositories. You must install and enable these first. Install the EPEL repository, if not already present. Choose the right version to match your OS version. # # EPEL 7 (For RHEL 7, CentOS 7, and SL 7) root@host # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm # # EPEL 8 (For RHEL 8 and CentOS Stream 8, Rocky Linux 8, AlmaLinux 8) root@host # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm # # EPEL 9 (For RHEL 9 and CentOS Stream 9, Rocky Linux 9, AlmaLinux 9) root@host # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm Verify that /etc/yum.repos.d/epel.repo exists; the [epel] section should contain: The line enabled=1 Either no priority setting, or a priority setting that is 99 or higher Warning If you have your own mirror or configuration of the EPEL repository, you MUST verify that the priority of the EPEL repository is either missing, or 99 or a higher number. The OSG repositories must have a better (numerically lower) priority than the EPEL repositories; otherwise, you might have dependency resolution (\"depsolving\") issues. Install the OSG Repositories \u00b6 This document assumes a fresh install. For instructions on upgrading from one OSG series to another, see the release series document . Install the OSG repository for your OS version and the OSG release series that you wish to use: OSG 23 EL8: root@host # yum install https://repo.opensciencegrid.org/osg/23-main/osg-23-main-el8-release-latest.rpm OSG 23 EL9: root@host # yum install https://repo.opensciencegrid.org/osg/23-main/osg-23-main-el9-release-latest.rpm OSG 3.6 EL7: root@host # yum install https://repo.opensciencegrid.org/osg/3.6/osg-3.6-el7-release-latest.rpm OSG 3.6 EL8: root@host # yum install https://repo.opensciencegrid.org/osg/3.6/osg-3.6-el8-release-latest.rpm OSG 3.6 EL9: root@host # yum install https://repo.opensciencegrid.org/osg/3.6/osg-3.6-el9-release-latest.rpm The only OSG repository enabled by default is the release one. If you want to enable another one (e.g. osg-testing ), then edit its file (e.g. 
/etc/yum.repos.d/osg-testing.repo ) and change the enabled option from 0 to 1: [osg-testing] name=OSG Software for Enterprise Linux 7 - Testing - $basearch #baseurl=https://repo.opensciencegrid.org/osg/3.6/el7/testing/$basearch mirrorlist=https://repo.opensciencegrid.org/mirror/osg/3.6/el7/testing/$basearch failovermethod=priority priority=98 enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-OSG file:///etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-2 Optional Configuration \u00b6 Enable automatic security updates \u00b6 For production services, we suggest only changing software versions during controlled downtime. Therefore we recommend security-only automatic updates or disabling automatic updates entirely. Note Automatic updates for EL8 and EL9 variants are provided in the dnf-automatic RPM, which is not installed by default. To enable only security related automatic updates: On EL 7 variants, edit /etc/yum/yum-cron.conf and set update_cmd = security On EL8 and EL9 variants, edit /etc/dnf/automatic.conf and set upgrade_type = security CentOS 7, CentOS Stream 8, and CentOS Stream 9 do not support security-only automatic updates; doing any of the above steps will prevent automatic updates from happening at all. To disable automatic updates entirely: On EL7 variants, run: root@host # service yum-cron stop On EL8 and EL9 variants, run: root@host # systemctl disable --now dnf-automatic.timer Configuring Spacewalk priorities \u00b6 Sites using Spacewalk to manage RPM packages will need to configure OSG Yum repository priorities using their Spacewalk ID. For example, if the OSG 3.4 repository's Spacewalk ID is centos_7_osg34_dev , modify /etc/yum/pluginconf.d/90-osg.conf to include the following: [centos_7_osg_34_dev] priority = 98 Repository Mirrors \u00b6 If you run a large site (>20 nodes), you should consider setting up a local mirror for the OSG repositories. A local Yum mirror allows you to reduce the amount of external bandwidth used when updating or installing packages. Add the following to a file in /etc/cron.d : * * * * root rsync -aH rsync://repo-rsync.opensciencegrid.org/osg/ /var/www/html/osg/ Or, to mirror only a single repository: * * * * root rsync -aH rsync://repo-rsync.opensciencegrid.org/osg//el9/development /var/www/html/osg//el7 Replace with the OSG release you would like to use (e.g. 23-main ) and with a number between 0 and 59. On your worker node, you can replace the baseurl line of /etc/yum.repos.d/osg.repo with the appropriate URL for your mirror. If you are interested in having your mirror be part of the OSG's default set of mirrors, please file a support ticket . Reference \u00b6 Basic use of Yum","title":"OSG Yum Repos"},{"location":"common/yum/#osg-yum-repositories","text":"This document introduces Yum repositories and how they are used in the OSG. If you are unfamiliar with Yum, see the documentation on using Yum and RPM .","title":"OSG Yum Repositories"},{"location":"common/yum/#repositories","text":"The OSG hosts multiple repositories at repo.opensciencegrid.org that are intended for public use: The OSG Yum repositories... Contain RPMs that... osg , osg-upcoming are considered production-ready (default). osg-testing , osg-upcoming-testing have passed developer or integration testing but not acceptance testing osg-development , osg-upcoming-development have not passed developer, integration or acceptance testing. Do not use without instruction from the OSG Software and Release Team. 
osg-contrib have been contributed from outside of the OSG Software and Release Team. See this section for details. Note The upcoming repositories contain newer software that might require manual action after an update. They are not enabled by default and must be enabled in addition to the main osg repository. See the upcoming software section for details. OSG's RPM packages also rely on external packages provided by supported OSes and EPEL. You must have the following repositories available and enabled: OS repositories, including the following ones that aren't enabled by default: extras (SL 7, CentOS 7, CentOS Stream 8, Rocky Linux 8, AlmaLinux 8) Server-Extras (RHEL 7) powertools (CentOS Stream 8, Rocky Linux 8, AlmaLinux 8) CodeReady Builder (RHEL 8) or crb (all EL9 variants) EPEL repositories OSG repositories If any of these repositories are missing, you may end up with installation issues or missing dependencies. Danger Other repositories, such as jpackage , dag , or rpmforge , are not supported and you may encounter problems if you use them.","title":"Repositories"},{"location":"common/yum/#upcoming-software","text":"Certain sites have requested new versions of software that would be considered \"disruptive\" or \"experimental\": upgrading to them would likely require manual intervention after their installation. We do not want sites to unwittingly upgrade to these versions. We have placed such software in separate repositories. Their names start with osg-upcoming and have the same structure as our standard repositories, as well as the same guarantees of quality and production-readiness. There are separate sets of upcoming repositories for each release series. For example, the OSG 23 repos have corresponding 23-upcoming repos . The upcoming repositories are meant to be layered on top of our standard repositories: installing software from the upcoming repositories requires also enabling the standard repositories from the same release.","title":"Upcoming Software"},{"location":"common/yum/#contrib-software","text":"In addition to our regular software repositories, we also have a contrib (short for \"contributed\") software repository. This is software that is does not go through the same software testing and release processes as the official OSG Software release, but may be useful to you. Particularly, contrib software is not guaranteed to be compatible with the rest of the OSG Software stack nor is it supported by the OSG. The definitive list of software in the contrib repository can be found here: OSG 23 EL8 contrib software repository OSG 23 EL9 contrib software repository OSG 3.6 EL7 contrib software repository OSG 3.6 EL8 contrib software repository OSG 3.6 EL9 contrib software repository If you would like to distribute your software in the OSG contrib repository, please contact us with a description of your software, what users it serves, and relevant RPM packaging.","title":"Contrib Software"},{"location":"common/yum/#installing-yum-repositories","text":"","title":"Installing Yum Repositories"},{"location":"common/yum/#install-the-yum-priorities-plugin-el7","text":"The Yum priorities plugin is used to tell Yum to prefer OSG packages over EPEL or OS packages. It is important to install and enable the Yum priorities plugin before installing OSG Software to ensure that you are getting the OSG-supported versions. This plugin is built into Yum on EL8 and EL9 distributions. 
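(Once priorities are configured as described in the steps below, you can confirm that a given package resolves from the OSG repository rather than EPEL; the package name here is only an illustration, and the right-hand column of the output shows which repository it comes from:)
root@host # yum list htcondor-ce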
Install the Yum priorities package: root@host # yum install yum-plugin-priorities Ensure that /etc/yum.conf has the following line in the [main] section: plugins=1","title":"Install the Yum priorities plugin (EL7)"},{"location":"common/yum/#enable-additional-os-repositories","text":"Some packages depend on packages that are in OS repositories not enabled by default. The repositories to enable, as well as the instructions to enable them, are OS-dependent. Note A repository is enabled if it has enabled=1 in its definition, or if the enabled line is missing (i.e. it is enabled unless specified otherwise.)","title":"Enable additional OS repositories"},{"location":"common/yum/#sl-7","text":"Install the yum-conf-extras RPM package. Ensure that the sl-extras repo in /etc/yum.repos.d/sl-extras.repo is enabled.","title":"SL 7"},{"location":"common/yum/#centos-7","text":"Ensure that the extras repo in /etc/yum.repos.d/CentOS-Base.repo is enabled.","title":"CentOS 7"},{"location":"common/yum/#centos-stream-8","text":"Ensure that the extras repo in /etc/yum.repos.d/CentOS-Stream-Extras.repo is enabled. Ensure that the powertools repo in /etc/yum.repos.d/CentOS-Stream-PowerTools.repo is enabled.","title":"CentOS Stream 8"},{"location":"common/yum/#rocky-linux-8","text":"Ensure that the extras repo in /etc/yum.repos.d/Rocky-Extras.repo is enabled. Ensure that the powertools repo in /etc/yum.repos.d/Rocky-PowerTools.repo is enabled.","title":"Rocky Linux 8"},{"location":"common/yum/#almalinux-8","text":"Ensure that the extras repo in /etc/yum.repos.d/almalinux.repo is enabled. Ensure that the powertools repo in /etc/yum.repos.d/almalinux-powertools.repo is enabled.","title":"AlmaLinux 8"},{"location":"common/yum/#rhel-7","text":"Ensure that the Server-Extras channel is enabled.","title":"RHEL 7"},{"location":"common/yum/#rhel-8","text":"Ensure that the CodeReady Linux Builder channel is enabled. See Red Hat's instructions on how to enable this repo.","title":"RHEL 8"},{"location":"common/yum/#rocky-linux-9","text":"Ensure that the crb repo in /etc/yum.repos.d/rocky.repo is enabled","title":"Rocky Linux 9"},{"location":"common/yum/#almalinux-9","text":"Ensure that the crb repo in /etc/yum.repos.d/almalinux-crb.repo is enabled","title":"AlmaLinux 9"},{"location":"common/yum/#centos-stream-9","text":"Ensure that the crb repo in /etc/yum.repos.d/centos.repo is enabled","title":"CentOS Stream 9"},{"location":"common/yum/#install-the-epel-repositories","text":"OSG software depends on packages distributed via the EPEL repositories. You must install and enable these first. Install the EPEL repository, if not already present. Choose the right version to match your OS version. # # EPEL 7 (For RHEL 7, CentOS 7, and SL 7) root@host # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm # # EPEL 8 (For RHEL 8 and CentOS Stream 8, Rocky Linux 8, AlmaLinux 8) root@host # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm # # EPEL 9 (For RHEL 9 and CentOS Stream 9, Rocky Linux 9, AlmaLinux 9) root@host # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm Verify that /etc/yum.repos.d/epel.repo exists; the [epel] section should contain: The line enabled=1 Either no priority setting, or a priority setting that is 99 or higher Warning If you have your own mirror or configuration of the EPEL repository, you MUST verify that the priority of the EPEL repository is either missing, or 99 or a higher number. 
The OSG repositories must have a better (numerically lower) priority than the EPEL repositories; otherwise, you might have dependency resolution (\"depsolving\") issues.","title":"Install the EPEL repositories"},{"location":"common/yum/#install-the-osg-repositories","text":"This document assumes a fresh install. For instructions on upgrading from one OSG series to another, see the release series document . Install the OSG repository for your OS version and the OSG release series that you wish to use: OSG 23 EL8: root@host # yum install https://repo.opensciencegrid.org/osg/23-main/osg-23-main-el8-release-latest.rpm OSG 23 EL9: root@host # yum install https://repo.opensciencegrid.org/osg/23-main/osg-23-main-el9-release-latest.rpm OSG 3.6 EL7: root@host # yum install https://repo.opensciencegrid.org/osg/3.6/osg-3.6-el7-release-latest.rpm OSG 3.6 EL8: root@host # yum install https://repo.opensciencegrid.org/osg/3.6/osg-3.6-el8-release-latest.rpm OSG 3.6 EL9: root@host # yum install https://repo.opensciencegrid.org/osg/3.6/osg-3.6-el9-release-latest.rpm The only OSG repository enabled by default is the release one. If you want to enable another one (e.g. osg-testing ), then edit its file (e.g. /etc/yum.repos.d/osg-testing.repo ) and change the enabled option from 0 to 1: [osg-testing] name=OSG Software for Enterprise Linux 7 - Testing - $basearch #baseurl=https://repo.opensciencegrid.org/osg/3.6/el7/testing/$basearch mirrorlist=https://repo.opensciencegrid.org/mirror/osg/3.6/el7/testing/$basearch failovermethod=priority priority=98 enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-OSG file:///etc/pki/rpm-gpg/RPM-GPG-KEY-OSG-2","title":"Install the OSG Repositories"},{"location":"common/yum/#optional-configuration","text":"","title":"Optional Configuration"},{"location":"common/yum/#enable-automatic-security-updates","text":"For production services, we suggest only changing software versions during controlled downtime. Therefore we recommend security-only automatic updates or disabling automatic updates entirely. Note Automatic updates for EL8 and EL9 variants are provided in the dnf-automatic RPM, which is not installed by default. To enable only security related automatic updates: On EL 7 variants, edit /etc/yum/yum-cron.conf and set update_cmd = security On EL8 and EL9 variants, edit /etc/dnf/automatic.conf and set upgrade_type = security CentOS 7, CentOS Stream 8, and CentOS Stream 9 do not support security-only automatic updates; doing any of the above steps will prevent automatic updates from happening at all. To disable automatic updates entirely: On EL7 variants, run: root@host # service yum-cron stop On EL8 and EL9 variants, run: root@host # systemctl disable --now dnf-automatic.timer","title":"Enable automatic security updates"},{"location":"common/yum/#configuring-spacewalk-priorities","text":"Sites using Spacewalk to manage RPM packages will need to configure OSG Yum repository priorities using their Spacewalk ID. For example, if the OSG 3.4 repository's Spacewalk ID is centos_7_osg34_dev , modify /etc/yum/pluginconf.d/90-osg.conf to include the following: [centos_7_osg_34_dev] priority = 98","title":"Configuring Spacewalk priorities"},{"location":"common/yum/#repository-mirrors","text":"If you run a large site (>20 nodes), you should consider setting up a local mirror for the OSG repositories. A local Yum mirror allows you to reduce the amount of external bandwidth used when updating or installing packages. 
Add the following to a file in /etc/cron.d : <random minute> * * * * root rsync -aH rsync://repo-rsync.opensciencegrid.org/osg/ /var/www/html/osg/ Or, to mirror only a single repository: <random minute> * * * * root rsync -aH rsync://repo-rsync.opensciencegrid.org/osg/<release>/el9/development /var/www/html/osg/<release>/el9/development Replace <release> with the OSG release you would like to use (e.g. 23-main ) and <random minute> with a number between 0 and 59. On your worker node, you can replace the baseurl line of /etc/yum.repos.d/osg.repo with the appropriate URL for your mirror. If you are interested in having your mirror be part of the OSG's default set of mirrors, please file a support ticket .","title":"Repository Mirrors"},{"location":"common/yum/#reference","text":"Basic use of Yum","title":"Reference"},{"location":"compute-element/covid-19/","text":"Supporting COVID-19 Research on the OSG \u00b6 Info The instructions in this document are deprecated, as COVID-19 jobs are no longer prioritized. There are a few options available for sites with computing resources who want to support the important and urgent work of COVID-19 researchers using the OSG. As we're currently routing such projects through the OSG VO, your site can be configured to accept pilots that exclusively run OSG VO jobs relating to COVID-19 research (among other pilots you support), allowing you to prioritize these pilots and account for this usage separately from other OSG activity. To support COVID-19 work, the overall process includes the following: Make the site computing resources available through an HTCondor-CE if you have not already done so. You can install a locally-managed instance or ask OSG to host the CE on your behalf. If neither solution is viable, or you'd like to discuss the options, please send email to help@osg-htc.org and we'll work with you to arrive at the best solution. If you already provide resources through an OSG Hosted CE, skip to this section . Enable the OSG VO on your HTCondor-CE. Set up a job route specific to COVID-19 pilot jobs (documented below). The job route will allow you to prioritize these jobs using local policy in your site's cluster. (Optional) To attract more user jobs, install CVMFS and Apptainer on your site's worker nodes Send email to help@osg-htc.org requesting that your CE receive COVID-19 pilots. We will need to know the CE hostname and any special restrictions that might apply to these pilots. Setting up a COVID-19 Job Route \u00b6 By default, COVID-19 pilots will look identical to OSG pilots except they will have the attribute IsCOVID19 = true . They do not require mapping to a distinct Unix account but can be sent to a prioritized queue or accounting group. Job routes are controlled by the JOB_ROUTER_ENTRIES configuration variable in HTCondor-CE. Customizations may be placed in /etc/condor-ce/config.d/ where files are parsed in lexicographical order, e.g. JOB_ROUTER_ENTRIES specified in 50-covid-routes.conf will override JOB_ROUTER_ENTRIES in 02-local-slurm.conf .
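Because later files win, you can ask HTCondor-CE which file is actually supplying a given setting before and after adding a route. A minimal sketch (the same condor_ce_config_val tool is used again in the verification steps further down this page):

```console
# Print the effective value of JOB_ROUTER_ENTRIES along with the file and line it was read from
root@host # condor_ce_config_val -verbose JOB_ROUTER_ENTRIES
```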
For Non-HTCondor batch systems \u00b6 To add a new route for COVID-19 pilots for non-HTCondor batch systems: Note the names of your currently enabled routes: condor_ce_job_router_info -config Add the following configuration to a file in /etc/condor-ce/config.d/ (files are parsed in lexicographical order): JOB_ROUTER_ENTRIES @=jre [ name = \"OSG_COVID19_Jobs\"; GridResource = \"batch slurm\"; TargetUniverse = 9; set_default_queue = \"covid19\"; Requirements = (TARGET.IsCOVID19 =?= true); ] $(JOB_ROUTER_ENTRIES) @jre Replacing slurm in the GridResource attribute with the appropriate value for your batch system (e.g., lsf , pbs , sge , or slurm ); and the value of set_default_queue with the name of the partition or queue of your local batch system dedicated to COVID-19 work. Ensure that COVID-19 jobs match to the new route. Choose one of the options below depending on your HTCondor version ( condor_version ): For versions of HTCondor >= 8.8.7 and < 8.9.0; or HTCondor >= 8.9.6: specify the routes considered by the job router and the order in which they're considered by adding the following configuration to a file in /etc/condor-ce/config.d/ : JOB_ROUTER_ROUTE_NAMES = OSG_COVID19_Jobs, $(JOB_ROUTER_ROUTE_NAMES) If your configuration does not already define JOB_ROUTER_ROUTE_NAMES , you need to add the name of all previous routes to it, leaving OSG_COVID19_Jobs at the start of the list. For example: JOB_ROUTER_ROUTE_NAMES = OSG_COVID19_Jobs, Local_Condor, $(JOB_ROUTER_ROUTE_NAMES) For older versions of HTCondor: add (TARGET.IsCOVID19 =!= true) to the Requirements of any existing routes. For example, the following job route: JOB_ROUTER_ENTRIES @=jre [ name = \"Local_Slurm\" GridResource = \"batch slurm\"; TargetUniverse = 9; set_default_queue = \"atlas; Requirements = (TARGET.Owner =!= \"osg\"); ] @jre Should be updated as follows: JOB_ROUTER_ENTRIES @=jre [ name = \"Local_Slurm\" GridResource = \"batch slurm\"; TargetUniverse = 9; set_default_queue = \"atlas; Requirements = (TARGET.Owner =!= \"osg\") && (TARGET.IsCOVID19 =!= true); ] @jre Reconfigure your HTCondor-CE: condor_ce_reconfig Continue onto this section to verify your configuration For HTCondor batch systems \u00b6 Similarly, at an HTCondor site, one can place these jobs into a separate accounting group by providing the set_AcctGroup and eval_set_AccountingGroup attributes in a new job route. To add a new route for COVID-19 pilots for non-HTCondor batch systems: Note the names of your currently enabled routes: condor_ce_job_router_info -config Add the following configuration to a file in /etc/condor-ce/config.d/ (files are parsed in lexicographical order): JOB_ROUTER_ENTRIES @=jre [ name = \"OSG_COVID19_Jobs\"; TargetUniverse = 5; set_AcctGroup = \"covid19\"; eval_set_AccountingGroup = strcat(AcctGroup, \".\", Owner); Requirements = (TARGET.IsCOVID19 =?= true); ] $(JOB_ROUTER_ENTRIES) @jre Replacing covid19 in set_AcctGroup with the name of the accounting group that you would like to use for COVID-19 jobs. Ensure that COVID-19 jobs match to the new route. 
Choose one of the options below depending on your HTCondor version ( condor_version ): For versions of HTCondor >= 8.8.7 and < 8.9.0; or HTCondor >= 8.9.6: specify the routes considered by the job router and the order in which they're considered by adding the following configuration to a file in /etc/condor-ce/config.d/ : JOB_ROUTER_ROUTE_NAMES = OSG_COVID19_Jobs, $(JOB_ROUTER_ROUTE_NAMES) For older versions of HTCondor: add (TARGET.IsCOVID19 =!= true) to the Requirements of any existing routes. For example, the following job route: JOB_ROUTER_ENTRIES @=jre [ name = \"Local_Condor\" TargetUniverse = 5; Requirements = (TARGET.Owner =!= \"osg\"); ] @jre Should be updated as follows: JOB_ROUTER_ENTRIES @=jre [ name = \"Local_Condor\" TargetUniverse = 5; Requirements = (TARGET.Owner =!= \"atlas\") && (TARGET.IsCOVID19 =!= true); ] @jre Reconfigure your HTCondor-CE: condor_ce_reconfig Continue onto this section to verify your configuration Verifying the COVID-19 Job Route \u00b6 To verify that your HTCondor-CE is configured to support COVID-19 jobs, perform the following steps: Ensure that the OSG_COVID19_Jobs route appears with all of your other previously enabled routes: condor_ce_job_router_info -config Known issue: removing old routes If your HTCondor-CE has jobs associated with a route that is removed from your configuration, this will result in a crashing Job Router. If you accidentally remove an old route, restore the route or remove all jobs associated with said route. Ensure that COVID-19 jobs will match to your new job route: For versions of HTCondor >= 8.8.7 and < 8.9.0; or HTCondor >= 8.9.6: OSG_COVID19_Jobs should be the first route in the routing table: condor_ce_config_val -verbose JOB_ROUTER_ROUTE_NAMES For older versions of HTCondor: the Requirements expresison of your OSG_COVID19_Jobs route must contain (TARGET.IsCOVID19 =?= true) and all other routes must contain (TARGET.IsCOVID19 =!= true) in their Requirements expression. After requesting COVID-19 jobs , verify that jobs are being routed appropriately, by examining pilots with condor_ce_router_q . Requesting COVID-19 Jobs \u00b6 To receive COVID-19 pilot jobs, send an email to help@osg-htc.org with the subject Requesting COVID-19 pilots and the following information: Whether you want to receive only COVID-19 jobs, or if you want to accept COVID-19 and other OSG jobs The hostname(s) of your HTCondor-CE(s) Any other restrictions that may apply to these jobs (e.g. number of available cores) Viewing COVID-19 Contributions \u00b6 You can view how many hours that COVID-19 projects have consumed at your site with this GRACC dashboard . Getting Help \u00b6 To get assistance, please use this page .","title":"Supporting COVID-19 Research on the OSG"},{"location":"compute-element/covid-19/#supporting-covid-19-research-on-the-osg","text":"Info The instructions in this document are deprecated, as COVID-19 jobs are no longer prioritized. There a few options available for sites with computing resources who want to support the important and urgent work of COVID-19 researchers using the OSG. As we're currently routing such projects through the OSG VO, your site can be configured to accept pilots that exclusively run OSG VO jobs relating to COVID-19 research (among other pilots you support), allowing you to prioritize these pilots and account for this usage separately from other OSG activity. 
To support COVID-19 work, the overall process includes the following: Make the site computing resources available through a HTCondor-CE if you have not already done so. You can install a locally-managed instance or ask OSG to host the CE on your behalf. If neither solution is viable, or you'd like to discuss the options, please send email to help@osg-htc.org and we'll work with you to arrive at the best solution. If you already provide resources through an OSG Hosted CE, skip to this section . Enable the OSG VO on your HTCondor-CE. Setup a job route specific to COVID-19 pilot jobs (documented below). The job route will allow you to prioritize these jobs using local policy in your site's cluster. (Optional) To attract more user jobs, install CVMFS and Apptainer on your site's worker nodes Send email to help@osg-htc.org requesting that your CE receive COVID-19 pilots. We will need to know the CE hostname and any special restrictions that might apply to these pilots.","title":"Supporting COVID-19 Research on the OSG"},{"location":"compute-element/covid-19/#setting-up-a-covid-19-job-route","text":"By default, COVID-19 pilots will look identical to OSG pilots except they will have the attribute IsCOVID19 = true . They do not require mapping to a distinct Unix account but can be sent to a prioritized queue or accounting group. Job routes are controlled by the JOB_ROUTER_ENTRIES configuration variable in HTCondor-CE. Customizations may be placed in /etc/condor-ce/config.d/ where files are parsed in lexicographical order, e.g. JOB_ROUTER_ENTRIES specified in 50-covid-routes.conf will override JOB_ROUTER_ENTRIES in 02-local-slurm.conf .","title":"Setting up a COVID-19 Job Route"},{"location":"compute-element/covid-19/#for-non-htcondor-batch-systems","text":"To add a new route for COVID-19 pilots for non-HTCondor batch systems: Note the names of your currently enabled routes: condor_ce_job_router_info -config Add the following configuration to a file in /etc/condor-ce/config.d/ (files are parsed in lexicographical order): JOB_ROUTER_ENTRIES @=jre [ name = \"OSG_COVID19_Jobs\"; GridResource = \"batch slurm\"; TargetUniverse = 9; set_default_queue = \"covid19\"; Requirements = (TARGET.IsCOVID19 =?= true); ] $(JOB_ROUTER_ENTRIES) @jre Replacing slurm in the GridResource attribute with the appropriate value for your batch system (e.g., lsf , pbs , sge , or slurm ); and the value of set_default_queue with the name of the partition or queue of your local batch system dedicated to COVID-19 work. Ensure that COVID-19 jobs match to the new route. Choose one of the options below depending on your HTCondor version ( condor_version ): For versions of HTCondor >= 8.8.7 and < 8.9.0; or HTCondor >= 8.9.6: specify the routes considered by the job router and the order in which they're considered by adding the following configuration to a file in /etc/condor-ce/config.d/ : JOB_ROUTER_ROUTE_NAMES = OSG_COVID19_Jobs, $(JOB_ROUTER_ROUTE_NAMES) If your configuration does not already define JOB_ROUTER_ROUTE_NAMES , you need to add the name of all previous routes to it, leaving OSG_COVID19_Jobs at the start of the list. For example: JOB_ROUTER_ROUTE_NAMES = OSG_COVID19_Jobs, Local_Condor, $(JOB_ROUTER_ROUTE_NAMES) For older versions of HTCondor: add (TARGET.IsCOVID19 =!= true) to the Requirements of any existing routes. 
For example, the following job route: JOB_ROUTER_ENTRIES @=jre [ name = \"Local_Slurm\"; GridResource = \"batch slurm\"; TargetUniverse = 9; set_default_queue = \"atlas\"; Requirements = (TARGET.Owner =!= \"osg\"); ] @jre Should be updated as follows: JOB_ROUTER_ENTRIES @=jre [ name = \"Local_Slurm\"; GridResource = \"batch slurm\"; TargetUniverse = 9; set_default_queue = \"atlas\"; Requirements = (TARGET.Owner =!= \"osg\") && (TARGET.IsCOVID19 =!= true); ] @jre Reconfigure your HTCondor-CE: condor_ce_reconfig Continue onto this section to verify your configuration","title":"For Non-HTCondor batch systems"},{"location":"compute-element/covid-19/#for-htcondor-batch-systems","text":"Similarly, at an HTCondor site, one can place these jobs into a separate accounting group by providing the set_AcctGroup and eval_set_AccountingGroup attributes in a new job route. To add a new route for COVID-19 pilots for HTCondor batch systems: Note the names of your currently enabled routes: condor_ce_job_router_info -config Add the following configuration to a file in /etc/condor-ce/config.d/ (files are parsed in lexicographical order): JOB_ROUTER_ENTRIES @=jre [ name = \"OSG_COVID19_Jobs\"; TargetUniverse = 5; set_AcctGroup = \"covid19\"; eval_set_AccountingGroup = strcat(AcctGroup, \".\", Owner); Requirements = (TARGET.IsCOVID19 =?= true); ] $(JOB_ROUTER_ENTRIES) @jre Replacing covid19 in set_AcctGroup with the name of the accounting group that you would like to use for COVID-19 jobs. Ensure that COVID-19 jobs match to the new route. Choose one of the options below depending on your HTCondor version ( condor_version ): For versions of HTCondor >= 8.8.7 and < 8.9.0; or HTCondor >= 8.9.6: specify the routes considered by the job router and the order in which they're considered by adding the following configuration to a file in /etc/condor-ce/config.d/ : JOB_ROUTER_ROUTE_NAMES = OSG_COVID19_Jobs, $(JOB_ROUTER_ROUTE_NAMES) For older versions of HTCondor: add (TARGET.IsCOVID19 =!= true) to the Requirements of any existing routes. For example, the following job route: JOB_ROUTER_ENTRIES @=jre [ name = \"Local_Condor\"; TargetUniverse = 5; Requirements = (TARGET.Owner =!= \"osg\"); ] @jre Should be updated as follows: JOB_ROUTER_ENTRIES @=jre [ name = \"Local_Condor\"; TargetUniverse = 5; Requirements = (TARGET.Owner =!= \"osg\") && (TARGET.IsCOVID19 =!= true); ] @jre Reconfigure your HTCondor-CE: condor_ce_reconfig Continue onto this section to verify your configuration","title":"For HTCondor batch systems"},{"location":"compute-element/covid-19/#verifying-the-covid-19-job-route","text":"To verify that your HTCondor-CE is configured to support COVID-19 jobs, perform the following steps: Ensure that the OSG_COVID19_Jobs route appears with all of your other previously enabled routes: condor_ce_job_router_info -config Known issue: removing old routes If your HTCondor-CE has jobs associated with a route that is removed from your configuration, this will result in a crashing Job Router. If you accidentally remove an old route, restore the route or remove all jobs associated with said route.
Ensure that COVID-19 jobs will match to your new job route: For versions of HTCondor >= 8.8.7 and < 8.9.0; or HTCondor >= 8.9.6: OSG_COVID19_Jobs should be the first route in the routing table: condor_ce_config_val -verbose JOB_ROUTER_ROUTE_NAMES For older versions of HTCondor: the Requirements expresison of your OSG_COVID19_Jobs route must contain (TARGET.IsCOVID19 =?= true) and all other routes must contain (TARGET.IsCOVID19 =!= true) in their Requirements expression. After requesting COVID-19 jobs , verify that jobs are being routed appropriately, by examining pilots with condor_ce_router_q .","title":"Verifying the COVID-19 Job Route"},{"location":"compute-element/covid-19/#requesting-covid-19-jobs","text":"To receive COVID-19 pilot jobs, send an email to help@osg-htc.org with the subject Requesting COVID-19 pilots and the following information: Whether you want to receive only COVID-19 jobs, or if you want to accept COVID-19 and other OSG jobs The hostname(s) of your HTCondor-CE(s) Any other restrictions that may apply to these jobs (e.g. number of available cores)","title":"Requesting COVID-19 Jobs"},{"location":"compute-element/covid-19/#viewing-covid-19-contributions","text":"You can view how many hours that COVID-19 projects have consumed at your site with this GRACC dashboard .","title":"Viewing COVID-19 Contributions"},{"location":"compute-element/covid-19/#getting-help","text":"To get assistance, please use this page .","title":"Getting Help"},{"location":"compute-element/hosted-ce/","text":"Requesting an OSG Hosted CE \u00b6 An OSG Hosted Compute Entrypoint (CE) is the entry point for resource requests coming from the OSG; it handles authorization and delegation of resource requests to your existing campus HPC/HTC cluster. Many sites set up their compute entrypoint locally. As an alternative, OSG offers a no-cost Hosted CE option wherein the OSG team will host and operate the HTCondor Compute Entrypoint, and configure it for the communities that you choose to support. This document explains the requirements and the procedure for requesting an OSG Hosted CE. Running more than 10,000 resource requests The Hosted CE can support thousands of concurrent resource request submissions. If you wish to run your own local compute entrypoint or expect to support more than 10,000 concurrently running OSG resource requests, see this page for installing the HTCondor-CE. Before Starting \u00b6 Before preparing your cluster for OSG resource requests, consider the following requirements: An existing compute cluster with a supported batch system running on a supported operating system Outbound network connectivity from the worker nodes (they can be behind NAT) One or more Unix accounts on your cluster's submit server with the following capabilities: Accessible via SSH key Use of SSH remote port forwarding ( AllowTcpForwarding yes ) and SSH multiplexing ( MaxSessions 10 or greater) Permission to submit jobs to your local cluster. Shared user home directories between the submit server and the worker nodes. Not required for HTCondor clusters: see this section for more details. Temporary scratch space on each worker node; site administrators should ensure that files in this directory are regularly cleaned out. OSG resource contributors must inform the OSG of any relevant changes to their site. Site downtimes For an improved turnaround time regarding an outage or downtime at your site, contact us and include downtime in the subject or body of the email. 
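For the SSH-related requirements listed above, the relevant sshd settings on the login host can be double-checked along these lines; this is only a sketch that assumes a stock OpenSSH setup with its configuration in /etc/ssh/sshd_config:

```console
# sshd -T prints the effective running configuration
root@host # sshd -T | grep -Ei 'allowtcpforwarding|maxsessions'
# After editing /etc/ssh/sshd_config to set AllowTcpForwarding yes and MaxSessions 10 (or more),
# reload the service so the changes take effect
root@host # systemctl reload sshd
```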
For additional technical details, please consult the reference section below. Don't meet the requirements? If your site does not meet these conditions, please contact us to discuss your options for contributing to the OSG. Scheduling a Planning Consultation \u00b6 Before participating in the OSG, either as a computational resource contributor or consumer, we ask that you contact us to set up a consultation. During this consultation, OSG staff will introduce you and your team to the OSG and develop a plan to meet your resource contribution and/or research goals. Preparing Your Local Cluster \u00b6 After the consultation, ensure that your local cluster meets the requirements as outlined above . In particular, you should now know which accounts to create for the communities that you wish to serve at your cluster. Also consider the size and number of jobs that the OSG should send to your site (e.g., number of cores, memory, GPUs, walltime) as well as their scheduling policy (e.g. preemptible backfill partitions). Additionally, OSG staff may have directed you to follow installation instructions from one or more of the following sections: (Recommended) Providing access to CVMFS \u00b6 Maximize resource utilization; required for GPU support Installing CVMFS on your cluster makes your resources more attractive to OSG user jobs! Additionally, if you plan to contribute GPUs to the OSG, installation of CVMFS is required . Many users in the OSG make of use software modules and/or containers provided by their collaborations or by the OSG Research Facilitation team. In order to support these users without having to install specific software modules on your cluster, you may provide a distributed software repository system called CernVM File System (CVMFS). In order to provide CVMFS at your site, you will need the following: A cluster-wide Frontier Squid proxy service with at least 50GB of cache space; installation instructions for Frontier Squid are provided here . A local CVMFS cache per worker node (10 GB minimum, 20 GB recommended) After setting up the Frontier Squid proxy and worker node local caches, install CVMFS on each worker node. (HTCondor clusters only) Installing the OSG Worker Node Client \u00b6 Skip this section if you have CVMFS or shared home directories! If you have CVMFS installed or shared home directories on your worker nodes, you can skip manual installation of the OSG Worker Node Client. All OSG sites need to provide the OSG Worker Node Client on each worker node in their local cluster. This is normally handled by OSG staff for a Hosted CE but that requires shared home directories across the cluster. However, for sites with an HTCondor batch system, often there is no shared filesystem set up. If you run an HTCondor site and it is easier to install and maintain the Worker Node Client on each worker node than to install CVMFS or maintain shared file system, you have the following options: Install the Worker Node Client from RPM Install the Worker Node Client from tarball Requesting an OSG Hosted CE \u00b6 After preparing your local cluster, apply for a Hosted CE by filling out the cluster integration questionnaire. Your answers will help our operators submit resource requests to your local cluster of the appropriate size and scale. Cluster Integration Questionnaire Can I change my answers at a later date? Yes! If you want the OSG to change the size (i.e. 
CPU, RAM), type (e.g., GPU requests), or number of resource requests, contact us with the FQDN of your login host and the details of your changes. Finalizing Installation \u00b6 After applying for an OSG Hosted CE, our staff will contact you with the following information: IP ranges of OSG hosted services Public SSH key to be installed in the OSG accounts Once this is done, OSG staff will work with you and your team to begin submitting resource requests to your site, first with some tests, then with a steady ramp-up to full production. Validating contributions \u00b6 In addition to any internal validation processes that you may have, the OSG provides monitoring to view which communities and projects within said communities are accessing your site, their fields of science, and home institution. Below is an example of the monitoring views that will be available for your cluster. To view your contributions, select your site from the Facility dropdown of the Payload job summary dashboard. Note that accounting data may take up to 24 hours to display. Reference \u00b6 User accounts \u00b6 Each resource pool in the OSG Consortium that uses Hosted CEs is mapped to your site as a fixed, specific account; we request the account names are of the form osg01 through osg20 . The mappings from Unix usernames to resource pools are as follows: Username Pool Supported Research osg01 OSPool Projects (primarily single PI) supported directly by the OSG organization osg02 GLOW Projects coming from the Center for High Throughput Computing at the University of Wisconsin-Madison osg03 HCC Projects coming from the Holland Computing Center at the University of Nebraska\u2013Lincoln osg04 CMS High-energy physics experiment from the Large Hadron Collider at CERN osg05 Fermilab Experiments from the Fermi National Accelerator Laboratory osg07 IGWN Gravitational wave detection experiments osg08 IGWN Gravitational wave detection experiments osg09 ATLAS High-energy physics experiment from the Large Hadron Collider at CERN osg10 GlueX Study of quark and gluon degrees of freedom in hadrons using high-energy photons osg11 DUNE Experiment for neutrino science and proton decay studies osg12 IceCube Research based on data from the IceCube neutrino detector osg13 XENON Dark matter search experiment osg14 JLab Experiments from the Thomas Jefferson National Accelerator Facility osg15 - osg20 - Unassigned For example, the activities in your batch system corresponding to the user osg02 will always be associated with the GLOW resource pool. Security \u00b6 OSG takes multiple precautions to maintain security and prevent unauthorized usage of resources: Access to the OSG system with SSH keys are restricted to the OSG staff maintaining them Users are carefully vetted before they are allowed to submit jobs to OSG Jobs running through OSG can be traced back to the user that submitted them Job submission can quickly be disabled if needed Our security team is readily contactable in case of an emergency: https://osg-htc.org/security/#reporting-a-security-incident How to Get Help \u00b6 Is your site not receiving jobs from an OSG Hosted CE? Consult our status page for Hosted CE outages. 
If there isn't an outage, you need help with setup, or otherwise have questions, contact us .","title":"Request a Hosted CE"},{"location":"compute-element/hosted-ce/#requesting-an-osg-hosted-ce","text":"An OSG Hosted Compute Entrypoint (CE) is the entry point for resource requests coming from the OSG; it handles authorization and delegation of resource requests to your existing campus HPC/HTC cluster. Many sites set up their compute entrypoint locally. As an alternative, OSG offers a no-cost Hosted CE option wherein the OSG team will host and operate the HTCondor Compute Entrypoint, and configure it for the communities that you choose to support. This document explains the requirements and the procedure for requesting an OSG Hosted CE. Running more than 10,000 resource requests The Hosted CE can support thousands of concurrent resource request submissions. If you wish to run your own local compute entrypoint or expect to support more than 10,000 concurrently running OSG resource requests, see this page for installing the HTCondor-CE.","title":"Requesting an OSG Hosted CE"},{"location":"compute-element/hosted-ce/#before-starting","text":"Before preparing your cluster for OSG resource requests, consider the following requirements: An existing compute cluster with a supported batch system running on a supported operating system Outbound network connectivity from the worker nodes (they can be behind NAT) One or more Unix accounts on your cluster's submit server with the following capabilities: Accessible via SSH key Use of SSH remote port forwarding ( AllowTcpForwarding yes ) and SSH multiplexing ( MaxSessions 10 or greater) Permission to submit jobs to your local cluster. Shared user home directories between the submit server and the worker nodes. Not required for HTCondor clusters: see this section for more details. Temporary scratch space on each worker node; site administrators should ensure that files in this directory are regularly cleaned out. OSG resource contributors must inform the OSG of any relevant changes to their site. Site downtimes For an improved turnaround time regarding an outage or downtime at your site, contact us and include downtime in the subject or body of the email. For additional technical details, please consult the reference section below. Don't meet the requirements? If your site does not meet these conditions, please contact us to discuss your options for contributing to the OSG.","title":"Before Starting"},{"location":"compute-element/hosted-ce/#scheduling-a-planning-consultation","text":"Before participating in the OSG, either as a computational resource contributor or consumer, we ask that you contact us to set up a consultation. During this consultation, OSG staff will introduce you and your team to the OSG and develop a plan to meet your resource contribution and/or research goals.","title":"Scheduling a Planning Consultation"},{"location":"compute-element/hosted-ce/#preparing-your-local-cluster","text":"After the consultation, ensure that your local cluster meets the requirements as outlined above . In particular, you should now know which accounts to create for the communities that you wish to serve at your cluster. Also consider the size and number of jobs that the OSG should send to your site (e.g., number of cores, memory, GPUs, walltime) as well as their scheduling policy (e.g. preemptible backfill partitions). 
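As one concrete illustration of a "preemptible backfill partition", a Slurm site might define a low-priority partition for the OSG accounts alongside its regular partition. This is only a sketch with hypothetical partition names, and it assumes PreemptType=preempt/partition_prio is set elsewhere in slurm.conf:

```
# slurm.conf fragment (hypothetical names; adjust nodes, limits, and preemption to site policy)
PartitionName=osg   Nodes=ALL Default=NO  MaxTime=48:00:00 State=UP PriorityTier=1  PreemptMode=REQUEUE
PartitionName=local Nodes=ALL Default=YES MaxTime=INFINITE State=UP PriorityTier=10
```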
Additionally, OSG staff may have directed you to follow installation instructions from one or more of the following sections:","title":"Preparing Your Local Cluster"},{"location":"compute-element/hosted-ce/#recommended-providing-access-to-cvmfs","text":"Maximize resource utilization; required for GPU support Installing CVMFS on your cluster makes your resources more attractive to OSG user jobs! Additionally, if you plan to contribute GPUs to the OSG, installation of CVMFS is required . Many users in the OSG make of use software modules and/or containers provided by their collaborations or by the OSG Research Facilitation team. In order to support these users without having to install specific software modules on your cluster, you may provide a distributed software repository system called CernVM File System (CVMFS). In order to provide CVMFS at your site, you will need the following: A cluster-wide Frontier Squid proxy service with at least 50GB of cache space; installation instructions for Frontier Squid are provided here . A local CVMFS cache per worker node (10 GB minimum, 20 GB recommended) After setting up the Frontier Squid proxy and worker node local caches, install CVMFS on each worker node.","title":"(Recommended) Providing access to CVMFS"},{"location":"compute-element/hosted-ce/#htcondor-clusters-only-installing-the-osg-worker-node-client","text":"Skip this section if you have CVMFS or shared home directories! If you have CVMFS installed or shared home directories on your worker nodes, you can skip manual installation of the OSG Worker Node Client. All OSG sites need to provide the OSG Worker Node Client on each worker node in their local cluster. This is normally handled by OSG staff for a Hosted CE but that requires shared home directories across the cluster. However, for sites with an HTCondor batch system, often there is no shared filesystem set up. If you run an HTCondor site and it is easier to install and maintain the Worker Node Client on each worker node than to install CVMFS or maintain shared file system, you have the following options: Install the Worker Node Client from RPM Install the Worker Node Client from tarball","title":"(HTCondor clusters only) Installing the OSG Worker Node Client"},{"location":"compute-element/hosted-ce/#requesting-an-osg-hosted-ce_1","text":"After preparing your local cluster, apply for a Hosted CE by filling out the cluster integration questionnaire. Your answers will help our operators submit resource requests to your local cluster of the appropriate size and scale. Cluster Integration Questionnaire Can I change my answers at a later date? Yes! If you want the OSG to change the size (i.e. 
CPU, RAM), type (e.g., GPU requests), or number of resource requests, contact us with the FQDN of your login host and the details of your changes.","title":"Requesting an OSG Hosted CE"},{"location":"compute-element/hosted-ce/#finalizing-installation","text":"After applying for an OSG Hosted CE, our staff will contact you with the following information: IP ranges of OSG hosted services Public SSH key to be installed in the OSG accounts Once this is done, OSG staff will work with you and your team to begin submitting resource requests to your site, first with some tests, then with a steady ramp-up to full production.","title":"Finalizing Installation"},{"location":"compute-element/hosted-ce/#validating-contributions","text":"In addition to any internal validation processes that you may have, the OSG provides monitoring to view which communities and projects within said communities are accessing your site, their fields of science, and home institution. Below is an example of the monitoring views that will be available for your cluster. To view your contributions, select your site from the Facility dropdown of the Payload job summary dashboard. Note that accounting data may take up to 24 hours to display.","title":"Validating contributions"},{"location":"compute-element/hosted-ce/#reference","text":"","title":"Reference"},{"location":"compute-element/hosted-ce/#user-accounts","text":"Each resource pool in the OSG Consortium that uses Hosted CEs is mapped to your site as a fixed, specific account; we request the account names are of the form osg01 through osg20 . The mappings from Unix usernames to resource pools are as follows: Username Pool Supported Research osg01 OSPool Projects (primarily single PI) supported directly by the OSG organization osg02 GLOW Projects coming from the Center for High Throughput Computing at the University of Wisconsin-Madison osg03 HCC Projects coming from the Holland Computing Center at the University of Nebraska\u2013Lincoln osg04 CMS High-energy physics experiment from the Large Hadron Collider at CERN osg05 Fermilab Experiments from the Fermi National Accelerator Laboratory osg07 IGWN Gravitational wave detection experiments osg08 IGWN Gravitational wave detection experiments osg09 ATLAS High-energy physics experiment from the Large Hadron Collider at CERN osg10 GlueX Study of quark and gluon degrees of freedom in hadrons using high-energy photons osg11 DUNE Experiment for neutrino science and proton decay studies osg12 IceCube Research based on data from the IceCube neutrino detector osg13 XENON Dark matter search experiment osg14 JLab Experiments from the Thomas Jefferson National Accelerator Facility osg15 - osg20 - Unassigned For example, the activities in your batch system corresponding to the user osg02 will always be associated with the GLOW resource pool.","title":"User accounts"},{"location":"compute-element/hosted-ce/#security","text":"OSG takes multiple precautions to maintain security and prevent unauthorized usage of resources: Access to the OSG system with SSH keys are restricted to the OSG staff maintaining them Users are carefully vetted before they are allowed to submit jobs to OSG Jobs running through OSG can be traced back to the user that submitted them Job submission can quickly be disabled if needed Our security team is readily contactable in case of an emergency: https://osg-htc.org/security/#reporting-a-security-incident","title":"Security"},{"location":"compute-element/hosted-ce/#how-to-get-help","text":"Is your site not receiving 
jobs from an OSG Hosted CE? Consult our status page for Hosted CE outages. If there isn't an outage, you need help with setup, or otherwise have questions, contact us .","title":"How to Get Help"},{"location":"compute-element/htcondor-ce-overview/","text":"HTCondor-CE Overview \u00b6 This document serves as an introduction to HTCondor-CE and how it works. Before continuing with the overview, make sure that you are familiar with the following concepts: An OSG site plan What is a batch system and which one will you use ( HTCondor , PBS, LSF, SGE, or SLURM )? Security via host certificates to authenticate servers and bearer tokens to authenticate clients Pilot jobs, frontends, and factories (i.e., GlideinWMS , AutoPyFactory) What is a Compute Entrypoint? \u00b6 An OSG Compute Entrypoint (CE) is the door for remote organizations to submit requests to temporarily allocate local compute resources. At the heart of the CE is the job gateway software, which is responsible for handling incoming jobs, authenticating and authorizing them, and delegating them to your batch system for execution. Most jobs that arrive at a CE (here referred to as \"CE jobs\") are not end-user jobs, but rather pilot jobs submitted from factories. Successful pilot jobs create and make available an environment for actual end-user jobs to match and ultimately run within the pilot job container. Eventually pilot jobs remove themselves, typically after a period of inactivity. Note The Compute Entrypoint was previously known as the \"Compute Element\". What is HTCondor-CE? \u00b6 HTCondor-CE is a special configuration of the HTCondor software designed to be a job gateway solution for the OSG Fabric of Services. It is configured to use the JobRouter daemon to delegate jobs by transforming and submitting them to the site\u2019s batch system. Benefits of running the HTCondor-CE: Scalability: HTCondor-CE is capable of supporting job workloads of large sites Debugging tools: HTCondor-CE offers many tools to help troubleshoot issues with jobs Routing as configuration: HTCondor-CE\u2019s mechanism to transform and submit jobs is customized via configuration variables, which means that customizations will persist across upgrades and will not involve modification of software internals to route jobs How CE Jobs Run \u00b6 Once an incoming CE job is authorized, it is placed into HTCondor-CE\u2019s scheduler where the JobRouter creates a transformed copy (called the routed job ) and submits the copy to the batch system (called the batch system job ). After submission, HTCondor-CE monitors the batch system job and communicates its status to the original CE job, which in turn notifies the original submitter (e.g., job factory) of any updates. When the job completes, files are transferred along the same chain: from the batch system to the CE, then from the CE to the original submitter. Hosted CE over SSH \u00b6 The Hosted CE is intended for small sites or as an introduction to providing capacity to collaborations. OSG staff configure and maintain an HTCondor-CE on behalf of the site. The Hosted CE is a special configuration of HTCondor-CE that can submit jobs to a remote cluster over SSH. It provides a simple starting point for opportunistic resource owners that want to start contributing capacity with minimal effort: an organization will be able to accept CE jobs by allowing SSH access to a login node in their cluster. 
If your site intends to run over 10,000 concurrent CE jobs, you will need to host your own HTCondor-CE because the Hosted CE has not yet been optimized for such loads. If you are interested in a Hosted CE solution, please follow the instructions on this page . On HTCondor batch systems \u00b6 For a site with an HTCondor batch system , the JobRouter can use HTCondor protocols to place a transformed copy of the CE job directly into the batch system\u2019s scheduler, meaning that the routed and batch system jobs are one and the same. Thus, there are three representations of your job, each with its own ID (see diagram below): Access point: the HTCondor job ID in the original queue HTCondor-CE: the incoming CE job\u2019s ID HTCondor batch system: the routed job\u2019s ID In an HTCondor-CE/HTCondor setup, files are transferred from HTCondor-CE\u2019s spool directory to the batch system\u2019s spool directory using internal HTCondor protocols. Note The JobRouter copies the job directly into the batch system and does not make use of condor_submit . This means that if the HTCondor batch system is configured to add attributes to incoming jobs when they are submitted (i.e., SUBMIT_EXPRS ), these attributes will not be added to the routed jobs. On other batch systems \u00b6 For non-HTCondor batch systems, the JobRouter transforms the CE job into a routed job on the CE and the routed job submits a job into the batch system via a process called the BLAHP. Thus, there are four representations of your job, each with its own ID (see diagram below): Login node: the HTCondor job ID in the original queue HTCondor-CE: the incoming CE job\u2019s ID and the routed job\u2019s ID HTCondor batch system: the batch system\u2019s job ID Although the following figure specifies the PBS case, it applies to all non-HTCondor batch systems: With non-HTCondor batch systems, HTCondor-CE cannot use internal HTCondor protocols to transfer files so its spool directory must be exported to a shared file system that is mounted on the batch system\u2019s worker nodes. How the CE is Customized \u00b6 Aside from the basic configuration required in the CE installation, there are two main ways to customize your CE (if you decide any customization is required at all): Deciding which collaborations are allowed to run at your site: collaborations will submit resource allocation requests to your CE using bearer tokens, and you can configure which collaboration's tokens you are willing to accept. How to filter and transform the CE jobs to be run on your batch system: Filtering and transforming CE jobs (i.e., setting site-specific attributes or resource limits), requires configuration of your site\u2019s job routes. For examples of common job routes, consult the JobRouter recipes page. Note If you are running HTCondor as your batch system, you will have two HTCondor configurations side-by-side (one residing in /etc/condor/ and the other in /etc/condor-ce ) and will need to make sure to differentiate the two when editing any configuration. How Security Works \u00b6 Among OSG services, communication is secured between various parties using a combination of PKI infrastructure involving Certificate Authorities (CAs) and bearer tokens. Services such as a Compute Entrypoint, present host certificates to prove their identity to clients, much like your browser verifies websites that you may visit. 
And to use these services, clients present bearer tokens declaring their association with a given collaboration and what permissions the collaboration has given the client. In turn, the service may be configured to authorize the client based on their collaboration. Next steps \u00b6 Once the basic installation is done, additional activities include: Setting up job routes to customize incoming jobs Submitting jobs to a HTCondor-CE Troubleshooting the HTCondor-CE Register the CE Register with the OSG GlideinWMS factories and/or the ATLAS AutoPyFactory","title":"HTCondor-CE Overview"},{"location":"compute-element/htcondor-ce-overview/#htcondor-ce-overview","text":"This document serves as an introduction to HTCondor-CE and how it works. Before continuing with the overview, make sure that you are familiar with the following concepts: An OSG site plan What is a batch system and which one will you use ( HTCondor , PBS, LSF, SGE, or SLURM )? Security via host certificates to authenticate servers and bearer tokens to authenticate clients Pilot jobs, frontends, and factories (i.e., GlideinWMS , AutoPyFactory)","title":"HTCondor-CE Overview"},{"location":"compute-element/htcondor-ce-overview/#what-is-a-compute-entrypoint","text":"An OSG Compute Entrypoint (CE) is the door for remote organizations to submit requests to temporarily allocate local compute resources. At the heart of the CE is the job gateway software, which is responsible for handling incoming jobs, authenticating and authorizing them, and delegating them to your batch system for execution. Most jobs that arrive at a CE (here referred to as \"CE jobs\") are not end-user jobs, but rather pilot jobs submitted from factories. Successful pilot jobs create and make available an environment for actual end-user jobs to match and ultimately run within the pilot job container. Eventually pilot jobs remove themselves, typically after a period of inactivity. Note The Compute Entrypoint was previously known as the \"Compute Element\".","title":"What is a Compute Entrypoint?"},{"location":"compute-element/htcondor-ce-overview/#what-is-htcondor-ce","text":"HTCondor-CE is a special configuration of the HTCondor software designed to be a job gateway solution for the OSG Fabric of Services. It is configured to use the JobRouter daemon to delegate jobs by transforming and submitting them to the site\u2019s batch system. Benefits of running the HTCondor-CE: Scalability: HTCondor-CE is capable of supporting job workloads of large sites Debugging tools: HTCondor-CE offers many tools to help troubleshoot issues with jobs Routing as configuration: HTCondor-CE\u2019s mechanism to transform and submit jobs is customized via configuration variables, which means that customizations will persist across upgrades and will not involve modification of software internals to route jobs","title":"What is HTCondor-CE?"},{"location":"compute-element/htcondor-ce-overview/#how-ce-jobs-run","text":"Once an incoming CE job is authorized, it is placed into HTCondor-CE\u2019s scheduler where the JobRouter creates a transformed copy (called the routed job ) and submits the copy to the batch system (called the batch system job ). After submission, HTCondor-CE monitors the batch system job and communicates its status to the original CE job, which in turn notifies the original submitter (e.g., job factory) of any updates. 
When the job completes, files are transferred along the same chain: from the batch system to the CE, then from the CE to the original submitter.","title":"How CE Jobs Run"},{"location":"compute-element/htcondor-ce-overview/#hosted-ce-over-ssh","text":"The Hosted CE is intended for small sites or as an introduction to providing capacity to collaborations. OSG staff configure and maintain an HTCondor-CE on behalf of the site. The Hosted CE is a special configuration of HTCondor-CE that can submit jobs to a remote cluster over SSH. It provides a simple starting point for opportunistic resource owners that want to start contributing capacity with minimal effort: an organization will be able to accept CE jobs by allowing SSH access to a login node in their cluster. If your site intends to run over 10,000 concurrent CE jobs, you will need to host your own HTCondor-CE because the Hosted CE has not yet been optimized for such loads. If you are interested in a Hosted CE solution, please follow the instructions on this page .","title":"Hosted CE over SSH"},{"location":"compute-element/htcondor-ce-overview/#on-htcondor-batch-systems","text":"For a site with an HTCondor batch system , the JobRouter can use HTCondor protocols to place a transformed copy of the CE job directly into the batch system\u2019s scheduler, meaning that the routed and batch system jobs are one and the same. Thus, there are three representations of your job, each with its own ID (see diagram below): Access point: the HTCondor job ID in the original queue HTCondor-CE: the incoming CE job\u2019s ID HTCondor batch system: the routed job\u2019s ID In an HTCondor-CE/HTCondor setup, files are transferred from HTCondor-CE\u2019s spool directory to the batch system\u2019s spool directory using internal HTCondor protocols. Note The JobRouter copies the job directly into the batch system and does not make use of condor_submit . This means that if the HTCondor batch system is configured to add attributes to incoming jobs when they are submitted (i.e., SUBMIT_EXPRS ), these attributes will not be added to the routed jobs.","title":"On HTCondor batch systems"},{"location":"compute-element/htcondor-ce-overview/#on-other-batch-systems","text":"For non-HTCondor batch systems, the JobRouter transforms the CE job into a routed job on the CE and the routed job submits a job into the batch system via a process called the BLAHP. Thus, there are four representations of your job, each with its own ID (see diagram below): Login node: the HTCondor job ID in the original queue HTCondor-CE: the incoming CE job\u2019s ID and the routed job\u2019s ID HTCondor batch system: the batch system\u2019s job ID Although the following figure specifies the PBS case, it applies to all non-HTCondor batch systems: With non-HTCondor batch systems, HTCondor-CE cannot use internal HTCondor protocols to transfer files so its spool directory must be exported to a shared file system that is mounted on the batch system\u2019s worker nodes.","title":"On other batch systems"},{"location":"compute-element/htcondor-ce-overview/#how-the-ce-is-customized","text":"Aside from the basic configuration required in the CE installation, there are two main ways to customize your CE (if you decide any customization is required at all): Deciding which collaborations are allowed to run at your site: collaborations will submit resource allocation requests to your CE using bearer tokens, and you can configure which collaboration's tokens you are willing to accept. 
How to filter and transform the CE jobs to be run on your batch system: Filtering and transforming CE jobs (i.e., setting site-specific attributes or resource limits), requires configuration of your site\u2019s job routes. For examples of common job routes, consult the JobRouter recipes page. Note If you are running HTCondor as your batch system, you will have two HTCondor configurations side-by-side (one residing in /etc/condor/ and the other in /etc/condor-ce ) and will need to make sure to differentiate the two when editing any configuration.","title":"How the CE is Customized"},{"location":"compute-element/htcondor-ce-overview/#how-security-works","text":"Among OSG services, communication is secured between various parties using a combination of PKI infrastructure involving Certificate Authorities (CAs) and bearer tokens. Services such as a Compute Entrypoint, present host certificates to prove their identity to clients, much like your browser verifies websites that you may visit. And to use these services, clients present bearer tokens declaring their association with a given collaboration and what permissions the collaboration has given the client. In turn, the service may be configured to authorize the client based on their collaboration.","title":"How Security Works"},{"location":"compute-element/htcondor-ce-overview/#next-steps","text":"Once the basic installation is done, additional activities include: Setting up job routes to customize incoming jobs Submitting jobs to a HTCondor-CE Troubleshooting the HTCondor-CE Register the CE Register with the OSG GlideinWMS factories and/or the ATLAS AutoPyFactory","title":"Next steps"},{"location":"compute-element/install-htcondor-ce/","text":"Installing and Maintaining HTCondor-CE \u00b6 The HTCondor-CE software is a job gateway for an OSG Compute Entrypoint (CE). As such, the OSG will submit resource allocation requests (RARs) jobs to your HTCondor-CE and it will handle authorization and delegation of RARs to your local batch system. In OSG today, RARs are sent to CEs as pilot jobs from a factory, which in turn are able to accept and run end-user jobs. See the upstream documentation for a more detailed introduction. Use this page to learn how to install, configure, run, test, and troubleshoot an OSG HTCondor-CE. OSG Hosted CE Unless you plan on running more than 10k concurrently running RARs or plan on making frequent configuration changes, we suggest requesting an OSG Hosted CE . Note If you are installing an HTCondor-CE for use outside of the OSG, consult the upstream documentation instead. Before Starting \u00b6 Before starting the installation process, consider the following points, consulting the upstream references as needed ( HTCondor-CE 23 ): User IDs: If they do not exist already, the installation will create the Linux users condor (UID 4716) and gratia You will also need to create Unix accounts for each collaboration that you wish to support. See details in the 'Configuring authentication' section below . SSL certificate: The HTCondor-CE service uses a host certificate and an accompanying key. If using a Let's Encrypt cert, install these as /etc/pki/tls/certs/localhost.crt and /etc/pki/tls/private/localhost.key If using an IGTF cert, install these as /etc/grid-security/hostcert.pem and /etc/grid-security/hostkey.pem See details in the Host Certificates overview . 
DNS entries: Forward and reverse DNS must resolve for the HTCondor-CE host Network ports: The pilot factories must be able to contact your HTCondor-CE service on port 9619 (TCP) Access point/login node: HTCondor-CE should be installed on a host that already has the ability to submit jobs into your local cluster File Systems : Non-HTCondor batch systems require a shared file system between the HTCondor-CE host and the batch system worker nodes. As with all OSG software installations, there are some one-time (per host) steps to prepare in advance: Ensure the host has a supported operating system Install the appropriate EPEL and OSG Yum repositories for your operating system. Obtain root access to the host Install CA certificates Installing HTCondor-CE \u00b6 An HTCondor-CE installation consists of the job gateway (i.e., the HTCondor-CE job router) and other support software (e.g., osg-configure , a Gratia probe for OSG accounting). To simplify installation, OSG provides convenience RPMs that install all required software. Clean yum cache: root@host # yum clean all --enablerepo = * Update software: root@host # yum update This command will update all packages (Optional) If your batch system is already installed via non-RPM means and is in the following list, install the appropriate 'empty' RPM. Otherwise, skip to the next step. If your batch system is\u2026 Then run the following command\u2026 HTCondor yum install empty-condor --enablerepo=osg-empty SLURM yum install empty-slurm --enablerepo=osg-empty (Optional) If your HTCondor batch system is already installed via non-OSG RPM means, add the line below to /etc/yum.repos.d/osg.repo . Otherwise, skip to the next step. exclude=condor Select the appropriate convenience RPM: If your batch system is... Then use the following package... HTCondor osg-ce-condor LSF osg-ce-lsf PBS osg-ce-pbs SGE osg-ce-sge SLURM osg-ce-slurm Install the CE software where is the package you selected in the above step.: root@host # yum install Configuring HTCondor-CE \u00b6 There are a few required configuration steps to connect HTCondor-CE with your batch system and authentication method. For more advanced configuration, see the section on optional configurations . Configuring the local batch system \u00b6 To configure HTCondor-CE to integrate with your local batch system, please refer to the upstream documentation . Configuring authentication \u00b6 HTCondor-CE clients will submit RARs accompanied by bearer tokens declaring their association with a given collaboration and what permissions the collaboration has given the client The osg-scitokens-mapfile , pulled in by the osg-ce package, provides default token to local user mappings. To accept RARs from a particular collaboration: Create the Unix account(s) corresponding to the last field in the default mapfile: /usr/share/condor-ce/mapfiles.d/osg-scitokens-mapfile.conf . For example, to add support for the OSPool, create the osg user account on the CE and across your cluster. (Optional) if you wish to change the user mapping, copy the relevant mapping from /usr/share/condor-ce/mapfiles.d/osg-scitokens-mapfile.conf to a .conf file in /etc/condor-ce/mapfiles.d/ and change the last field to the desired username. 
For example, if you wish to add support for the OSPool but prefer to map OSPool pilot jobs to the osgpilot account that you created on your CE and across your cluster, you could add the following to /etc/condor-ce/mapfiles.d/50-ospool.conf : # OSG SCITOKENS /^https\\:\\/\\/scitokens\\.org\\/osg\\-connect,/ osgpilot For more details of the mapfile format, consult the \"SciTokens\" section of the upstream documentation . Bannning a collaboration \u00b6 Implicit banning Note that if you have not created the mapped user per the above section , it is not strictly necessary to add a ban mapping. HTCondor-CE will only authenticate remote RAR submission for the relevant credential if the Unix user exists. To explicitly ban a remote submitter from your HTCondor-CE, add a line like the following to a file in /etc/condor-ce/mapfiles.d/*.conf : SCITOKENS /,/ @banned.htcondor.org Replacing with a regular expression and with an arbitrary user name. For example, to ban OSPool pilots from your site, you could add the following to /etc/condor-ce/config.d/99-bans.conf : SCITOKENS /^https\\:\\/\\/scitokens\\.org\\/osg\\-connect,/ osgpilot@banned.htcondor.org Automatic configuration \u00b6 The OSG CE metapackage brings along a configuration tool, osg-configure , that is designed to automatically configure the different pieces of software required for an OSG HTCondor-CE: Enable your batch system in the HTCondor-CE configuration by editing the enabled field in the /etc/osg/config.d/20-.ini : enabled = True Read through the other .ini files in the /etc/osg/config.d directory and make any necessary changes. See the osg-configure documentation for details. Validate the configuration settings root@host # osg-configure -v Fix any errors (at least) that osg-configure reports. Once the validation command succeeds without errors, apply the configuration settings: root@host # osg-configure -c Optional configuration \u00b6 In addition to the configurations above, you may need to further configure how pilot jobs are filtered and transformed before they are submitted to your local batch system or otherwise change the behavior of your CE. For detailed instructions, please refer to the upstream documentation: Configuring the Job Router Optional configuration Accounting with multiple CEs or local user jobs \u00b6 Note For non-HTCondor batch systems only If your site has multiple CEs or you have local users submitting to the same local batch system, the OSG accounting software needs to be configured so that it doesn't over report the number of jobs. Modify the value of SuppressNoDNRecords in /etc/gratia/htcondor-ce/ProbeConfig on each of your CE's so that it reads: SuppressNoDNRecords=\"1\" Starting and Validating HTCondor-CE \u00b6 For information on how to start and validate the core HTCondor-CE services, please refer to the upstream documentation Troubleshooting HTCondor-CE \u00b6 For information on how to troubleshoot your HTCondor-CE, please refer to the upstream documentation: Common issues Debugging tools Helpful logs Registering the CE \u00b6 To contribute capacity, your CE must be registered with the OSG Consortium . To register your resource: Identify the facility, site, and resource group where your HTCondor-CE is hosted. For example, the Center for High Throughput Computing at the University of Wisconsin-Madison uses the following information: Facility: University of Wisconsin Site: CHTC Resource Group: CHTC Using the above information, create or update the appropriate YAML file, using this template as a guide. 
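As a purely illustrative sketch of such a YAML entry (the layout mirrors the partial Squid registration example elsewhere in these docs; the Resources/CE keys and the hostname here are assumptions, so treat the linked template and existing Topology files as authoritative): Resources: CHTC: FQDN: ce01.example.wisc.edu Services: CE: Description: Compute Entrypoint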
Getting Help \u00b6 To get assistance, please use the this page .","title":"Install HTCondor-CE"},{"location":"compute-element/install-htcondor-ce/#installing-and-maintaining-htcondor-ce","text":"The HTCondor-CE software is a job gateway for an OSG Compute Entrypoint (CE). As such, the OSG will submit resource allocation requests (RARs) jobs to your HTCondor-CE and it will handle authorization and delegation of RARs to your local batch system. In OSG today, RARs are sent to CEs as pilot jobs from a factory, which in turn are able to accept and run end-user jobs. See the upstream documentation for a more detailed introduction. Use this page to learn how to install, configure, run, test, and troubleshoot an OSG HTCondor-CE. OSG Hosted CE Unless you plan on running more than 10k concurrently running RARs or plan on making frequent configuration changes, we suggest requesting an OSG Hosted CE . Note If you are installing an HTCondor-CE for use outside of the OSG, consult the upstream documentation instead.","title":"Installing and Maintaining HTCondor-CE"},{"location":"compute-element/install-htcondor-ce/#before-starting","text":"Before starting the installation process, consider the following points, consulting the upstream references as needed ( HTCondor-CE 23 ): User IDs: If they do not exist already, the installation will create the Linux users condor (UID 4716) and gratia You will also need to create Unix accounts for each collaboration that you wish to support. See details in the 'Configuring authentication' section below . SSL certificate: The HTCondor-CE service uses a host certificate and an accompanying key. If using a Let's Encrypt cert, install these as /etc/pki/tls/certs/localhost.crt and /etc/pki/tls/private/localhost.key If using an IGTF cert, install these as /etc/grid-security/hostcert.pem and /etc/grid-security/hostkey.pem See details in the Host Certificates overview . DNS entries: Forward and reverse DNS must resolve for the HTCondor-CE host Network ports: The pilot factories must be able to contact your HTCondor-CE service on port 9619 (TCP) Access point/login node: HTCondor-CE should be installed on a host that already has the ability to submit jobs into your local cluster File Systems : Non-HTCondor batch systems require a shared file system between the HTCondor-CE host and the batch system worker nodes. As with all OSG software installations, there are some one-time (per host) steps to prepare in advance: Ensure the host has a supported operating system Install the appropriate EPEL and OSG Yum repositories for your operating system. Obtain root access to the host Install CA certificates","title":"Before Starting"},{"location":"compute-element/install-htcondor-ce/#installing-htcondor-ce","text":"An HTCondor-CE installation consists of the job gateway (i.e., the HTCondor-CE job router) and other support software (e.g., osg-configure , a Gratia probe for OSG accounting). To simplify installation, OSG provides convenience RPMs that install all required software. Clean yum cache: root@host # yum clean all --enablerepo = * Update software: root@host # yum update This command will update all packages (Optional) If your batch system is already installed via non-RPM means and is in the following list, install the appropriate 'empty' RPM. Otherwise, skip to the next step. 
If your batch system is\u2026 Then run the following command\u2026 HTCondor yum install empty-condor --enablerepo=osg-empty SLURM yum install empty-slurm --enablerepo=osg-empty (Optional) If your HTCondor batch system is already installed via non-OSG RPM means, add the line below to /etc/yum.repos.d/osg.repo . Otherwise, skip to the next step. exclude=condor Select the appropriate convenience RPM: If your batch system is... Then use the following package... HTCondor osg-ce-condor LSF osg-ce-lsf PBS osg-ce-pbs SGE osg-ce-sge SLURM osg-ce-slurm Install the CE software where is the package you selected in the above step.: root@host # yum install ","title":"Installing HTCondor-CE"},{"location":"compute-element/install-htcondor-ce/#configuring-htcondor-ce","text":"There are a few required configuration steps to connect HTCondor-CE with your batch system and authentication method. For more advanced configuration, see the section on optional configurations .","title":"Configuring HTCondor-CE"},{"location":"compute-element/install-htcondor-ce/#configuring-the-local-batch-system","text":"To configure HTCondor-CE to integrate with your local batch system, please refer to the upstream documentation .","title":"Configuring the local batch system"},{"location":"compute-element/install-htcondor-ce/#configuring-authentication","text":"HTCondor-CE clients will submit RARs accompanied by bearer tokens declaring their association with a given collaboration and what permissions the collaboration has given the client The osg-scitokens-mapfile , pulled in by the osg-ce package, provides default token to local user mappings. To accept RARs from a particular collaboration: Create the Unix account(s) corresponding to the last field in the default mapfile: /usr/share/condor-ce/mapfiles.d/osg-scitokens-mapfile.conf . For example, to add support for the OSPool, create the osg user account on the CE and across your cluster. (Optional) if you wish to change the user mapping, copy the relevant mapping from /usr/share/condor-ce/mapfiles.d/osg-scitokens-mapfile.conf to a .conf file in /etc/condor-ce/mapfiles.d/ and change the last field to the desired username. For example, if you wish to add support for the OSPool but prefer to map OSPool pilot jobs to the osgpilot account that you created on your CE and across your cluster, you could add the following to /etc/condor-ce/mapfiles.d/50-ospool.conf : # OSG SCITOKENS /^https\\:\\/\\/scitokens\\.org\\/osg\\-connect,/ osgpilot For more details of the mapfile format, consult the \"SciTokens\" section of the upstream documentation .","title":"Configuring authentication"},{"location":"compute-element/install-htcondor-ce/#bannning-a-collaboration","text":"Implicit banning Note that if you have not created the mapped user per the above section , it is not strictly necessary to add a ban mapping. HTCondor-CE will only authenticate remote RAR submission for the relevant credential if the Unix user exists. To explicitly ban a remote submitter from your HTCondor-CE, add a line like the following to a file in /etc/condor-ce/mapfiles.d/*.conf : SCITOKENS /,/ @banned.htcondor.org Replacing with a regular expression and with an arbitrary user name. 
For example, to ban OSPool pilots from your site, you could add the following to /etc/condor-ce/config.d/99-bans.conf : SCITOKENS /^https\\:\\/\\/scitokens\\.org\\/osg\\-connect,/ osgpilot@banned.htcondor.org","title":"Bannning a collaboration"},{"location":"compute-element/install-htcondor-ce/#automatic-configuration","text":"The OSG CE metapackage brings along a configuration tool, osg-configure , that is designed to automatically configure the different pieces of software required for an OSG HTCondor-CE: Enable your batch system in the HTCondor-CE configuration by editing the enabled field in the /etc/osg/config.d/20-.ini : enabled = True Read through the other .ini files in the /etc/osg/config.d directory and make any necessary changes. See the osg-configure documentation for details. Validate the configuration settings root@host # osg-configure -v Fix any errors (at least) that osg-configure reports. Once the validation command succeeds without errors, apply the configuration settings: root@host # osg-configure -c","title":"Automatic configuration"},{"location":"compute-element/install-htcondor-ce/#optional-configuration","text":"In addition to the configurations above, you may need to further configure how pilot jobs are filtered and transformed before they are submitted to your local batch system or otherwise change the behavior of your CE. For detailed instructions, please refer to the upstream documentation: Configuring the Job Router Optional configuration","title":"Optional configuration"},{"location":"compute-element/install-htcondor-ce/#accounting-with-multiple-ces-or-local-user-jobs","text":"Note For non-HTCondor batch systems only If your site has multiple CEs or you have local users submitting to the same local batch system, the OSG accounting software needs to be configured so that it doesn't over report the number of jobs. Modify the value of SuppressNoDNRecords in /etc/gratia/htcondor-ce/ProbeConfig on each of your CE's so that it reads: SuppressNoDNRecords=\"1\"","title":"Accounting with multiple CEs or local user jobs"},{"location":"compute-element/install-htcondor-ce/#starting-and-validating-htcondor-ce","text":"For information on how to start and validate the core HTCondor-CE services, please refer to the upstream documentation","title":"Starting and Validating HTCondor-CE"},{"location":"compute-element/install-htcondor-ce/#troubleshooting-htcondor-ce","text":"For information on how to troubleshoot your HTCondor-CE, please refer to the upstream documentation: Common issues Debugging tools Helpful logs","title":"Troubleshooting HTCondor-CE"},{"location":"compute-element/install-htcondor-ce/#registering-the-ce","text":"To contribute capacity, your CE must be registered with the OSG Consortium . To register your resource: Identify the facility, site, and resource group where your HTCondor-CE is hosted. 
For example, the Center for High Throughput Computing at the University of Wisconsin-Madison uses the following information: Facility: University of Wisconsin Site: CHTC Resource Group: CHTC Using the above information, create or update the appropriate YAML file, using this template as a guide.","title":"Registering the CE"},{"location":"compute-element/install-htcondor-ce/#getting-help","text":"To get assistance, please use the this page .","title":"Getting Help"},{"location":"compute-element/job-router-recipes/","text":"Up-to-date documentation can be found at https://osg-htc.org/docs/compute-element/install-htcondor-ce/","title":"Job router recipes"},{"location":"compute-element/slurm-recipes/","text":"Slurm Configuration Recipes \u00b6 This document contains examples of common Slurm configurations used by sites to contribute capacity to the OSPool. Contributing X% of Your Cluster \u00b6 To contribute a percentage of your Slurm cluster to the OSPool, set aside a number of whole nodes for a dedicated OSPool partition : Determine the percentage of your cluster that you would like to contribute and use that to calculate the number of cores to meet that percentage Select nodes and sum the number of cores to meet your desired contribution In slurm.conf , configure the NodeName for each type of chassis and assign specific nodes to PartitionName=ospool For example, if your cluster is 5120 cores and you wanted to contribute 10% of the cluster to the OSPool, your slurm.conf could contain the following: # Dell PowerEdge C6525, AMD EPYC 7513 32-Core Processor @ 2.6GHz NodeName=spark-a[002-004,006-028] CPUs=64 Boards=1 SocketsPerBoard=2 CoresPerSocket=32 ThreadsPerCore=1 RealMemory=256000 State=UNKNOWN Features=amd,avx,avx2 # Dell PowerEdge R6525, AMD EPYC 7763 64-Core Processor NodeName=spark-a[029-071,204-206] CPUs=128 Boards=1 SocketsPerBoard=2 CoresPerSocket=64 ThreadsPerCore=1 RealMemory=512000 State=UNKNOWN Features=amd,avx,avx2 # OSPool Partition, -- 10% of Shared is approx 512 cores; 6x64cores + 1x128 cores = 512 PartitionName=ospool State=UP Nodes=spark-a[002-004,006-008,029] DefaultTime=0-04:00:00 MaxTime=1-00:00:00 PreemptMode=OFF Priority=50 AllowGroups=slurm-admin,osg01","title":"Slurm recipes"},{"location":"compute-element/slurm-recipes/#slurm-configuration-recipes","text":"This document contains examples of common Slurm configurations used by sites to contribute capacity to the OSPool.","title":"Slurm Configuration Recipes"},{"location":"compute-element/slurm-recipes/#contributing-x-of-your-cluster","text":"To contribute a percentage of your Slurm cluster to the OSPool, set aside a number of whole nodes for a dedicated OSPool partition : Determine the percentage of your cluster that you would like to contribute and use that to calculate the number of cores to meet that percentage Select nodes and sum the number of cores to meet your desired contribution In slurm.conf , configure the NodeName for each type of chassis and assign specific nodes to PartitionName=ospool For example, if your cluster is 5120 cores and you wanted to contribute 10% of the cluster to the OSPool, your slurm.conf could contain the following: # Dell PowerEdge C6525, AMD EPYC 7513 32-Core Processor @ 2.6GHz NodeName=spark-a[002-004,006-028] CPUs=64 Boards=1 SocketsPerBoard=2 CoresPerSocket=32 ThreadsPerCore=1 RealMemory=256000 State=UNKNOWN Features=amd,avx,avx2 # Dell PowerEdge R6525, AMD EPYC 7763 64-Core Processor NodeName=spark-a[029-071,204-206] CPUs=128 Boards=1 SocketsPerBoard=2 CoresPerSocket=64 
ThreadsPerCore=1 RealMemory=512000 State=UNKNOWN Features=amd,avx,avx2 # OSPool Partition, -- 10% of Shared is approx 512 cores; 6x64cores + 1x128 cores = 512 PartitionName=ospool State=UP Nodes=spark-a[002-004,006-008,029] DefaultTime=0-04:00:00 MaxTime=1-00:00:00 PreemptMode=OFF Priority=50 AllowGroups=slurm-admin,osg01","title":"Contributing X% of Your Cluster"},{"location":"compute-element/submit-htcondor-ce/","text":"Up-to-date documentation can be found at https://osg-htc.org/docs/compute-element/install-htcondor-ce/","title":"Submit htcondor ce"},{"location":"compute-element/troubleshoot-htcondor-ce/","text":"Up-to-date documentation can be found at https://osg-htc.org/docs/compute-element/install-htcondor-ce/","title":"Troubleshoot htcondor ce"},{"location":"data/external-oasis-repos/","text":"Install an OASIS Repository \u00b6 OASIS (the OSG A pplication S oftware I nstallation S ervice) is an infrastructure, based on CVMFS , for distributing software throughout the OSG. Once software is installed into an OASIS repository, the goal is to make it available across about 90% of the OSG within an hour. OASIS consists of keysigning infrastructure, a content distribution network (CDN), and a shared CVMFS repository that is hosted by the OSG. Many use cases will be covered by utilizing the shared repository ; this document covers how to install, configure, and host your own CVMFS repository server . This server will distribute software via OASIS, but will be hosted and operated externally from the OSG project. OASIS-based distribution and key signing is available to OSG VOs or repositories affiliated with an OSG VO. See the policy page for more information on what repositories OSG is willing to distribute. Before Starting \u00b6 The host OS must be: RHEL7 or RHEL8 (or equivalent). Additionally, User IDs: If it does not exist already, the installation will create the cvmfs Linux user Group IDs: If they do not exist already, the installation will create the Linux groups cvmfs and fuse Network ports: This page will configure the repository to distribute using Apache HTTPD on port 8000. At the minimum, the repository needs in-bound access from the OASIS CDN. Disk space: This host will need enough free disk space to host two copies of the software: one compressed and one uncompressed. /srv/cvmfs will hold all the published data (compressed and de-deuplicated). The /var/spool/cvmfs directory will contain all the data in all current transactions (uncompressed). Root access will be needed to install. Installation of software into the repository itself will be done as an unprivileged user. Yum will need to be configured to use the OSG repositories . Overlay-FS limitations CVMFS on RHEL7 only supports Overlay-FS if the underlying filesystem is ext3 or ext4 ; make sure /var/spool/cvmfs is one of these filesystem types. If this is not possible, add CVMFS_DONT_CHECK_OVERLAYFS_VERSION=yes to your CVMFS configuration. 
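An illustrative way to check what will back that directory before creating the repository (df is standard; /var/spool/cvmfs itself may not exist until the repository is made, so check its parent): root@host # df -T /var/spool The Type column should show ext3 or ext4, or xfs subject to the note that follows.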
Using xfs will work if it was created with ftype=1 Installation \u00b6 Installation is a straightforward install via yum : root@host # yum install cvmfs-server osg-oasis Apache and Repository Mounts \u00b6 For all installs, we recommend mounting all the local repositories on startup: root@host # echo \"cvmfs_server mount -a\" >>/etc/rc.local root@host # chmod +x /etc/rc.local The Apache HTTPD service should be configured to listen on port 8000, have the KeepAlive option enabled, and be started: root@host # echo Listen 8000 >>/etc/httpd/conf.d/cvmfs.conf root@host # echo KeepAlive on >>/etc/httpd/conf.d/cvmfs.conf root@host # chkconfig httpd on root@host # service httpd start Check Firewalls Make sure that port 8000 is available to the Internet. Check the setting of the host- and site-level firewalls. The next steps will fail if the web server is not accessible. Creating a Repository \u00b6 Prior to creation, the repository administrator will need to make two decisions: Select a repository name ; typically, this is derived from the VO or project's name and ends in opensciencegrid.org . For example, the NoVA VO runs the repository nova.opensciencegrid.org . For this section, we will use . Select a repository owner : Software publication will need to run by a non- root Unix user account; for this document, we will use as the account name of the repository owner. The initial repository creation must be run as root : root@host # echo -e \"\\*\\\\t\\\\t-\\\\tnofile\\\\t\\\\t16384\" >>/etc/security/limits.conf root@host # ulimit -n 16384 root@host # cvmfs_server mkfs -o root@host # cat >/srv/cvmfs//.htaccess <>/etc/cvmfs/repositories.d//server.conf <>/etc/cvmfs/repositories.d//server.conf </.cvmfswhitelist | cat -v That should print several lines including some gibberish at the end. Hosting a Repository on OASIS \u00b6 In order to host a repository on OASIS, perform the following steps: Verify your VO's registration is up-to-date . All repositories need to be associated with a VO; the VO needs to assign an OASIS manager in Topology who would be responsible for the contents of any of the VO's repositories and will be contacted in case of issues. To designate an OASIS manager, have the VO manager update the Topology registration . Send a message to OSG support using the following template: Please add a new CVMFS repository to OASIS for VO using the URL http://:8000/cvmfs/ The VO responsible manager will be . Replace the items with the appropriate values. If the repository name matches *.opensciencegrid.org or *.osgstorage.org , wait for the go-ahead from the OSG representative before continuing with the remaining instructions; for all other repositories (such as *.egi.eu ), you are done. When you are told in the ticket to proceed to the next step, first if the repository might be in a transaction abort it: root@host # su -c \"cvmfs_server abort \" Then execute the following commands: root@host # wget -O /srv/cvmfs//.cvmfswhitelist \\ http://oasis.opensciencegrid.org/cvmfs//.cvmfswhitelist root@host # cp /etc/cvmfs/keys/opensciencegrid.org/opensciencegrid.org.pub \\ /etc/cvmfs/keys/.pub Replace as appropriate. If the cp command prompts about overwriting an existing file, type 'y'. Verify that publishing operation succeeds: root@host # su -c \"cvmfs_server transaction \" root@host # su -c \"cvmfs_server publish \" Within an hour, the repository updates should appear at the OSG Operations and FNAL Stratum-1 servers. 
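One illustrative way to keep an eye on that is to compare the repository manifest your server publishes with what the Stratum-1s serve; for example, on the repository host (replace <repository name> with your repository; the manifest is partly binary, hence cat -v): user@host $ wget -qO- http://localhost:8000/cvmfs/<repository name>/.cvmfspublished | cat -v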
On success, make sure the whitelist update happens daily by creating /etc/cron.d/fetch-cvmfs-whitelist with the following contents: 5 4 * * * cd /srv/cvmfs/ && wget -qO .cvmfswhitelist.new http://oasis.opensciencegrid.org/cvmfs//.cvmfswhitelist && mv .cvmfswhitelist.new .cvmfswhitelist Note This cronjob eliminates the need for the repository service administrator to periodically use cvmfs_server resign to update .cvmfswhitelist as described in the upstream CVMFS documentation. Update the open support ticket to indicate that the previous steps have been completed Once the repository is fully replicated on the OSG, the VO may proceed in publishing into CVMFS using the account on the repository server. Tip We strongly recommend the repository maintainer read through the upstream documentation on maintaining repositories and content limitations . Finally, if the new repository will be used outside of the U.S., the VO should open a GGUS ticket following EGI's PROC20 to get the repository replicated onto worldwide Stratum 1s. Replacing an Existing OASIS Repository Server \u00b6 If a need arises to replace a server for an existing *.opensciencegrid.org or *.osgstorage.org repository, there are two ways to do it: one without changing the DNS name and one with changing it. The latter can take longer because it requires OSG Operations intervention. Revision numbers must increase CVMFS does not allow repository revision numbers to decrease, so the instructions below make sure the revision numbers only go up. Without changing the server DNS name \u00b6 If you are recreating the repository on the same machine, use the following command to remove the repository configuration while preserving the data and keys: root@host # cvmfs_server rmfs -p Otherwise if it is a new machine, copy the keys from /etc/cvmfs/keys/ .* and the data from /srv/cvmfs/ from the old server to the new, making sure that no publish operations happen on the old server while you copy the data. Then in either case use cvmfs_server import instead of cvmfs_server mkfs in the above instructions for Creating the Repository , in order to reuse old data and keys. Note that you wil need to reapply any custom configuration changes under /etc/cvmfs/repositories.d/ ` that was on the old server. If you run an old and a new machine in parallel for a while, make sure that when you put the new machine into production (by moving the DNS name) that the new machine has had at least as many publishes as the old machine, so the revision number does not decrease. With changing the server DNS name \u00b6 Note If you create a repository from scratch, as opposed to copying the data and keys from an old server, it is in fact better to change the DNS name of the server because that causes the OSG Operations server to reinitialize the .cvmfswhitelist. If you create a replacement repository on a new machine from scratch, follow the normal instructions on this page above, but with the following differences in the Hosting a Repository on OASIS section: In step 2, instead of asking in the support ticket to create a new repository, give the new URL and ask them to change the repository registration to that URL. When you do the publish in step 5, add a -n NNNN option where NNNN is a revision number greater than the number on the existing repository. That number can be found by this command on a client machine: user@host $ attr -qg revision /cvmfs/ Skip step 6; there is no need to tell OSG Operations when you are finished. 
After enough time has elapsed for the publish to propagate to clients, typically around 15 minutes, verify that the new chosen revision has reached a client. Removing a Repository from OASIS \u00b6 In order to remove a repository that is being hosted on OASIS, perform the following steps: If the repository has been replicated outside of the U.S., open a GGUS ticket assigned to support unit \"Software and Data Distribution (CVMFS)\" asking that the replication be removed from EGI Stratum-1s. Remind them in the ticket that there are worldwide Stratum-1s that automatically replicate all OSG repositories that RAL replicates, so those Stratum-1s cannot remove their replicas before RAL does but their administrators will need to be notified to remove their replicas within 8 hours after RAL does to avoid alarms. Wait until this ticket is resolved before proceeding. Open a support ticket asking to shut down the repository, giving the repository name (e.g., ), and the corresponding VO.","title":"Install an OASIS Repo"},{"location":"data/external-oasis-repos/#install-an-oasis-repository","text":"OASIS (the OSG A pplication S oftware I nstallation S ervice) is an infrastructure, based on CVMFS , for distributing software throughout the OSG. Once software is installed into an OASIS repository, the goal is to make it available across about 90% of the OSG within an hour. OASIS consists of keysigning infrastructure, a content distribution network (CDN), and a shared CVMFS repository that is hosted by the OSG. Many use cases will be covered by utilizing the shared repository ; this document covers how to install, configure, and host your own CVMFS repository server . This server will distribute software via OASIS, but will be hosted and operated externally from the OSG project. OASIS-based distribution and key signing is available to OSG VOs or repositories affiliated with an OSG VO. See the policy page for more information on what repositories OSG is willing to distribute.","title":"Install an OASIS Repository"},{"location":"data/external-oasis-repos/#before-starting","text":"The host OS must be: RHEL7 or RHEL8 (or equivalent). Additionally, User IDs: If it does not exist already, the installation will create the cvmfs Linux user Group IDs: If they do not exist already, the installation will create the Linux groups cvmfs and fuse Network ports: This page will configure the repository to distribute using Apache HTTPD on port 8000. At the minimum, the repository needs in-bound access from the OASIS CDN. Disk space: This host will need enough free disk space to host two copies of the software: one compressed and one uncompressed. /srv/cvmfs will hold all the published data (compressed and de-deuplicated). The /var/spool/cvmfs directory will contain all the data in all current transactions (uncompressed). Root access will be needed to install. Installation of software into the repository itself will be done as an unprivileged user. Yum will need to be configured to use the OSG repositories . Overlay-FS limitations CVMFS on RHEL7 only supports Overlay-FS if the underlying filesystem is ext3 or ext4 ; make sure /var/spool/cvmfs is one of these filesystem types. If this is not possible, add CVMFS_DONT_CHECK_OVERLAYFS_VERSION=yes to your CVMFS configuration. 
Using xfs will work if it was created with ftype=1","title":"Before Starting"},{"location":"data/external-oasis-repos/#installation","text":"Installation is a straightforward install via yum : root@host # yum install cvmfs-server osg-oasis","title":"Installation"},{"location":"data/external-oasis-repos/#apache-and-repository-mounts","text":"For all installs, we recommend mounting all the local repositories on startup: root@host # echo \"cvmfs_server mount -a\" >>/etc/rc.local root@host # chmod +x /etc/rc.local The Apache HTTPD service should be configured to listen on port 8000, have the KeepAlive option enabled, and be started: root@host # echo Listen 8000 >>/etc/httpd/conf.d/cvmfs.conf root@host # echo KeepAlive on >>/etc/httpd/conf.d/cvmfs.conf root@host # chkconfig httpd on root@host # service httpd start Check Firewalls Make sure that port 8000 is available to the Internet. Check the setting of the host- and site-level firewalls. The next steps will fail if the web server is not accessible.","title":"Apache and Repository Mounts"},{"location":"data/external-oasis-repos/#creating-a-repository","text":"Prior to creation, the repository administrator will need to make two decisions: Select a repository name ; typically, this is derived from the VO or project's name and ends in opensciencegrid.org . For example, the NoVA VO runs the repository nova.opensciencegrid.org . For this section, we will use . Select a repository owner : Software publication will need to run by a non- root Unix user account; for this document, we will use as the account name of the repository owner. The initial repository creation must be run as root : root@host # echo -e \"\\*\\\\t\\\\t-\\\\tnofile\\\\t\\\\t16384\" >>/etc/security/limits.conf root@host # ulimit -n 16384 root@host # cvmfs_server mkfs -o root@host # cat >/srv/cvmfs//.htaccess <>/etc/cvmfs/repositories.d//server.conf <>/etc/cvmfs/repositories.d//server.conf </.cvmfswhitelist | cat -v That should print several lines including some gibberish at the end.","title":"Creating a Repository"},{"location":"data/external-oasis-repos/#hosting-a-repository-on-oasis","text":"In order to host a repository on OASIS, perform the following steps: Verify your VO's registration is up-to-date . All repositories need to be associated with a VO; the VO needs to assign an OASIS manager in Topology who would be responsible for the contents of any of the VO's repositories and will be contacted in case of issues. To designate an OASIS manager, have the VO manager update the Topology registration . Send a message to OSG support using the following template: Please add a new CVMFS repository to OASIS for VO using the URL http://:8000/cvmfs/ The VO responsible manager will be . Replace the items with the appropriate values. If the repository name matches *.opensciencegrid.org or *.osgstorage.org , wait for the go-ahead from the OSG representative before continuing with the remaining instructions; for all other repositories (such as *.egi.eu ), you are done. When you are told in the ticket to proceed to the next step, first if the repository might be in a transaction abort it: root@host # su -c \"cvmfs_server abort \" Then execute the following commands: root@host # wget -O /srv/cvmfs//.cvmfswhitelist \\ http://oasis.opensciencegrid.org/cvmfs//.cvmfswhitelist root@host # cp /etc/cvmfs/keys/opensciencegrid.org/opensciencegrid.org.pub \\ /etc/cvmfs/keys/.pub Replace as appropriate. If the cp command prompts about overwriting an existing file, type 'y'. 
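An illustrative sanity check that the fresh whitelist and the key landed where expected (substitute your repository name): root@host # ls -l /srv/cvmfs/<repository name>/.cvmfswhitelist /etc/cvmfs/keys/<repository name>.pub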
Verify that publishing operation succeeds: root@host # su -c \"cvmfs_server transaction \" root@host # su -c \"cvmfs_server publish \" Within an hour, the repository updates should appear at the OSG Operations and FNAL Stratum-1 servers. On success, make sure the whitelist update happens daily by creating /etc/cron.d/fetch-cvmfs-whitelist with the following contents: 5 4 * * * cd /srv/cvmfs/ && wget -qO .cvmfswhitelist.new http://oasis.opensciencegrid.org/cvmfs//.cvmfswhitelist && mv .cvmfswhitelist.new .cvmfswhitelist Note This cronjob eliminates the need for the repository service administrator to periodically use cvmfs_server resign to update .cvmfswhitelist as described in the upstream CVMFS documentation. Update the open support ticket to indicate that the previous steps have been completed Once the repository is fully replicated on the OSG, the VO may proceed in publishing into CVMFS using the account on the repository server. Tip We strongly recommend the repository maintainer read through the upstream documentation on maintaining repositories and content limitations . Finally, if the new repository will be used outside of the U.S., the VO should open a GGUS ticket following EGI's PROC20 to get the repository replicated onto worldwide Stratum 1s.","title":"Hosting a Repository on OASIS"},{"location":"data/external-oasis-repos/#replacing-an-existing-oasis-repository-server","text":"If a need arises to replace a server for an existing *.opensciencegrid.org or *.osgstorage.org repository, there are two ways to do it: one without changing the DNS name and one with changing it. The latter can take longer because it requires OSG Operations intervention. Revision numbers must increase CVMFS does not allow repository revision numbers to decrease, so the instructions below make sure the revision numbers only go up.","title":"Replacing an Existing OASIS Repository Server"},{"location":"data/external-oasis-repos/#without-changing-the-server-dns-name","text":"If you are recreating the repository on the same machine, use the following command to remove the repository configuration while preserving the data and keys: root@host # cvmfs_server rmfs -p Otherwise if it is a new machine, copy the keys from /etc/cvmfs/keys/ .* and the data from /srv/cvmfs/ from the old server to the new, making sure that no publish operations happen on the old server while you copy the data. Then in either case use cvmfs_server import instead of cvmfs_server mkfs in the above instructions for Creating the Repository , in order to reuse old data and keys. Note that you wil need to reapply any custom configuration changes under /etc/cvmfs/repositories.d/ ` that was on the old server. If you run an old and a new machine in parallel for a while, make sure that when you put the new machine into production (by moving the DNS name) that the new machine has had at least as many publishes as the old machine, so the revision number does not decrease.","title":"Without changing the server DNS name"},{"location":"data/external-oasis-repos/#with-changing-the-server-dns-name","text":"Note If you create a repository from scratch, as opposed to copying the data and keys from an old server, it is in fact better to change the DNS name of the server because that causes the OSG Operations server to reinitialize the .cvmfswhitelist. 
If you create a replacement repository on a new machine from scratch, follow the normal instructions on this page above, but with the following differences in the Hosting a Repository on OASIS section: In step 2, instead of asking in the support ticket to create a new repository, give the new URL and ask them to change the repository registration to that URL. When you do the publish in step 5, add a -n NNNN option where NNNN is a revision number greater than the number on the existing repository. That number can be found by this command on a client machine: user@host $ attr -qg revision /cvmfs/ Skip step 6; there is no need to tell OSG Operations when you are finished. After enough time has elapsed for the publish to propagate to clients, typically around 15 minutes, verify that the new chosen revision has reached a client.","title":"With changing the server DNS name"},{"location":"data/external-oasis-repos/#removing-a-repository-from-oasis","text":"In order to remove a repository that is being hosted on OASIS, perform the following steps: If the repository has been replicated outside of the U.S., open a GGUS ticket assigned to support unit \"Software and Data Distribution (CVMFS)\" asking that the replication be removed from EGI Stratum-1s. Remind them in the ticket that there are worldwide Stratum-1s that automatically replicate all OSG repositories that RAL replicates, so those Stratum-1s cannot remove their replicas before RAL does but their administrators will need to be notified to remove their replicas within 8 hours after RAL does to avoid alarms. Wait until this ticket is resolved before proceeding. Open a support ticket asking to shut down the repository, giving the repository name (e.g., ), and the corresponding VO.","title":"Removing a Repository from OASIS"},{"location":"data/frontier-squid/","text":"Install the Frontier Squid HTTP Caching Proxy \u00b6 Frontier Squid is a distribution of the well-known squid HTTP caching proxy software that is optimized for use with applications on the Worldwide LHC Computing Grid (WLCG). It has many advantages over regular squid for common distributed computing applications, especially Frontier and CVMFS. The OSG distribution of frontier-squid is a straight rebuild of the upstream frontier-squid package for the convenience of OSG users. This document is intended for System Administrators who are installing frontier-squid , the OSG distribution of the Frontier Squid software. Frontier Squid Is Recommended \u00b6 OSG recommends that all sites run a caching proxy for HTTP and HTTPS to help reduce bandwidth and improve throughput. To that end, Compute Element (CE) installations include Frontier Squid automatically. We encourage all sites to configure and use this service, as described below. For large sites that expect heavy load on the proxy, it is best to run the proxy on its own host. If you are unsure if your site qualifies, we recommend initially running the proxy on your CE host and monitoring its bandwidth. If the network usage regularly peaks at over one third of the bandwidth capacity, move the proxy to a new host. Before Starting \u00b6 Before starting the installation process, consider the following points (consulting the Reference section below as needed): Hardware requirements: If you will be supporting the Frontier application at your site, review the hardware recommendations to determine how to size your equipment. 
User IDs: If it does not exist already, the installation will create the squid Linux user Network ports: Clients within your cluster (e.g., OSG user jobs) will communicate with Frontier Squid on port 3128 (TCP). Additionally, central infrastructure will monitor Frontier Squid through port 3401 (UDP); see this section for more details. As with all OSG software installations, there are some one-time (per host) steps to prepare in advance: Ensure the host has a supported operating system Obtain root access to the host Prepare the required Yum repositories Installing Frontier Squid \u00b6 To install Frontier Squid, make sure that your host is up to date before installing the required packages: Clean yum cache: root@host # yum clean all --enablerepo = * Update software: root@host # yum update This command will update all packages Install Frontier Squid: root@host # yum install frontier-squid Configuring Frontier Squid \u00b6 Configuring the Frontier Squid Service \u00b6 To configure the Frontier Squid service itself: Follow the Configuration section of the upstream Frontier Squid documentation . Enable, start, and test the service (as described below). Register the squid (also as described below ). Note An important difference between the standard Squid software and the Frontier Squid variant is that Frontier Squid changes are in /etc/squid/customize.sh instead of /etc/squid/squid.conf . Configuring the OSG CE \u00b6 To configure the OSG Compute Entrypoint (CE) to know about your Frontier Squid service: On your CE host (which may be different than your Frontier Squid host), edit /etc/osg/config.d/01-squid.ini Make sure that enabled is set to True Set location to the hostname and port of your Frontier Squid service (e.g., my.squid.host.edu:3128 ) Leave the other settings at DEFAULT unless you have specific reasons to change them Run osg-configure -c to propagate the changes on your CE. Note You may want to finish other CE configuration tasks before running osg-configure . Just be sure to run it once before starting CE services. Using Frontier-Squid \u00b6 Start the frontier-squid service and enable it to start at boot time. As a reminder, here are common service commands (all run as root ): To... Run the command... Start the service systemctl start frontier-squid Stop the service systemctl stop frontier-squid Enable the service to start on boot systemctl enable frontier-squid Disable the service from starting on boot systemctl disable frontier-squid Validating Frontier Squid \u00b6 As any user on another computer, do the following (where is the fully qualified domain name of your squid server): user@host $ export http_proxy = http://:3128 user@host $ wget -qdO/dev/null http://frontier.cern.ch 2 > & 1 | grep X-Cache X-Cache: MISS from user@host $ wget -qdO/dev/null http://frontier.cern.ch 2 > & 1 | grep X-Cache X-Cache: HIT from If the grep doesn't print anything, try removing it from the pipeline to see if errors are obvious. If the second try says MISS again, something is probably wrong with the squid cache writes. Look at the squid access.log file to try to see what's wrong. Registering Frontier Squid \u00b6 To register your Frontier Squid host, follow the general registration instructions here with the following Frontier Squid-specific details. Alternatively, contact us for assistance with the registration process. Add a Squid: section to the Services: list, with any relevant fields for that service. This is a partial example: ... FQDN: Services: Squid: Description: Generic squid service ... 
Replacing with your Frontier Squid server's DNS entry or in the case of multiple Frontier Squid servers for a single resource, the round-robin DNS entry. See the BNL_ATLAS_Frontier_Squid for a complete example. Normally registered squids will be monitored by WLCG. This is strongly recommended even for non-WLCG sites so operations experts can help with diagnosing problems. However, if a site declines monitoring, that can be indicated by setting Monitored: false in a Details: section below Description: . Registration is still important for the sake of excluding squids from worker node failover monitors. The default if Details: Monitored: is not set is true . If you set Monitored to true, also enable monitoring as described in the upstream documentation on enabling monitoring . A few hours after a squid is registered and marked Active (and not marked Monitored: false ), verify that it is monitored by WLCG . Reference \u00b6 Users \u00b6 The frontier-squid installation will create one user account unless it already exists. User Comment squid Reduced privilege user that the squid process runs under. Set the default gid of the \"squid\" user to be a group that is also called \"squid\". The package can instead use another user name of your choice if you create a configuration file before installation. Details are in the upstream documentation Preparation section . Networking \u00b6 Open the following ports on your Frontier Squid hosts: Port Number Protocol WAN LAN Comment 3128 tcp \u2713 Also limited in squid ACLs. Should be limited to access from your worker nodes 3401 udp \u2713 Also limited in squid ACLs. Should be limited to public monitoring server addresses The addresses of the WLCG monitoring servers for use in firewalls are listed in the upstream documentation Enabling monitoring section . Frontier Squid Log Files \u00b6 Log file contents are explained in the upstream documentation Log file contents section .","title":"Install Frontier Squid RPM"},{"location":"data/frontier-squid/#install-the-frontier-squid-http-caching-proxy","text":"Frontier Squid is a distribution of the well-known squid HTTP caching proxy software that is optimized for use with applications on the Worldwide LHC Computing Grid (WLCG). It has many advantages over regular squid for common distributed computing applications, especially Frontier and CVMFS. The OSG distribution of frontier-squid is a straight rebuild of the upstream frontier-squid package for the convenience of OSG users. This document is intended for System Administrators who are installing frontier-squid , the OSG distribution of the Frontier Squid software.","title":"Install the Frontier Squid HTTP Caching Proxy"},{"location":"data/frontier-squid/#frontier-squid-is-recommended","text":"OSG recommends that all sites run a caching proxy for HTTP and HTTPS to help reduce bandwidth and improve throughput. To that end, Compute Element (CE) installations include Frontier Squid automatically. We encourage all sites to configure and use this service, as described below. For large sites that expect heavy load on the proxy, it is best to run the proxy on its own host. If you are unsure if your site qualifies, we recommend initially running the proxy on your CE host and monitoring its bandwidth. 
If the network usage regularly peaks at over one third of the bandwidth capacity, move the proxy to a new host.","title":"Frontier Squid Is Recommended"},{"location":"data/frontier-squid/#before-starting","text":"Before starting the installation process, consider the following points (consulting the Reference section below as needed): Hardware requirements: If you will be supporting the Frontier application at your site, review the hardware recommendations to determine how to size your equipment. User IDs: If it does not exist already, the installation will create the squid Linux user Network ports: Clients within your cluster (e.g., OSG user jobs) will communicate with Frontier Squid on port 3128 (TCP). Additionally, central infrastructure will monitor Frontier Squid through port 3401 (UDP); see this section for more details. As with all OSG software installations, there are some one-time (per host) steps to prepare in advance: Ensure the host has a supported operating system Obtain root access to the host Prepare the required Yum repositories","title":"Before Starting"},{"location":"data/frontier-squid/#installing-frontier-squid","text":"To install Frontier Squid, make sure that your host is up to date before installing the required packages: Clean yum cache: root@host # yum clean all --enablerepo = * Update software: root@host # yum update This command will update all packages Install Frontier Squid: root@host # yum install frontier-squid","title":"Installing Frontier Squid"},{"location":"data/frontier-squid/#configuring-frontier-squid","text":"","title":"Configuring Frontier Squid"},{"location":"data/frontier-squid/#configuring-the-frontier-squid-service","text":"To configure the Frontier Squid service itself: Follow the Configuration section of the upstream Frontier Squid documentation . Enable, start, and test the service (as described below). Register the squid (also as described below ). Note An important difference between the standard Squid software and the Frontier Squid variant is that Frontier Squid changes are in /etc/squid/customize.sh instead of /etc/squid/squid.conf .","title":"Configuring the Frontier Squid Service"},{"location":"data/frontier-squid/#configuring-the-osg-ce","text":"To configure the OSG Compute Entrypoint (CE) to know about your Frontier Squid service: On your CE host (which may be different than your Frontier Squid host), edit /etc/osg/config.d/01-squid.ini Make sure that enabled is set to True Set location to the hostname and port of your Frontier Squid service (e.g., my.squid.host.edu:3128 ) Leave the other settings at DEFAULT unless you have specific reasons to change them Run osg-configure -c to propagate the changes on your CE. Note You may want to finish other CE configuration tasks before running osg-configure . Just be sure to run it once before starting CE services.","title":"Configuring the OSG CE"},{"location":"data/frontier-squid/#using-frontier-squid","text":"Start the frontier-squid service and enable it to start at boot time. As a reminder, here are common service commands (all run as root ): To... Run the command... 
Start the service systemctl start frontier-squid Stop the service systemctl stop frontier-squid Enable the service to start on boot systemctl enable frontier-squid Disable the service from starting on boot systemctl disable frontier-squid","title":"Using Frontier-Squid"},{"location":"data/frontier-squid/#validating-frontier-squid","text":"As any user on another computer, do the following (where is the fully qualified domain name of your squid server): user@host $ export http_proxy = http://:3128 user@host $ wget -qdO/dev/null http://frontier.cern.ch 2 > & 1 | grep X-Cache X-Cache: MISS from user@host $ wget -qdO/dev/null http://frontier.cern.ch 2 > & 1 | grep X-Cache X-Cache: HIT from If the grep doesn't print anything, try removing it from the pipeline to see if errors are obvious. If the second try says MISS again, something is probably wrong with the squid cache writes. Look at the squid access.log file to try to see what's wrong.","title":"Validating Frontier Squid"},{"location":"data/frontier-squid/#registering-frontier-squid","text":"To register your Frontier Squid host, follow the general registration instructions here with the following Frontier Squid-specific details. Alternatively, contact us for assistance with the registration process. Add a Squid: section to the Services: list, with any relevant fields for that service. This is a partial example: ... FQDN: Services: Squid: Description: Generic squid service ... Replacing with your Frontier Squid server's DNS entry or in the case of multiple Frontier Squid servers for a single resource, the round-robin DNS entry. See the BNL_ATLAS_Frontier_Squid for a complete example. Normally registered squids will be monitored by WLCG. This is strongly recommended even for non-WLCG sites so operations experts can help with diagnosing problems. However, if a site declines monitoring, that can be indicated by setting Monitored: false in a Details: section below Description: . Registration is still important for the sake of excluding squids from worker node failover monitors. The default if Details: Monitored: is not set is true . If you set Monitored to true, also enable monitoring as described in the upstream documentation on enabling monitoring . A few hours after a squid is registered and marked Active (and not marked Monitored: false ), verify that it is monitored by WLCG .","title":"Registering Frontier Squid"},{"location":"data/frontier-squid/#reference","text":"","title":"Reference"},{"location":"data/frontier-squid/#users","text":"The frontier-squid installation will create one user account unless it already exists. User Comment squid Reduced privilege user that the squid process runs under. Set the default gid of the \"squid\" user to be a group that is also called \"squid\". The package can instead use another user name of your choice if you create a configuration file before installation. Details are in the upstream documentation Preparation section .","title":"Users"},{"location":"data/frontier-squid/#networking","text":"Open the following ports on your Frontier Squid hosts: Port Number Protocol WAN LAN Comment 3128 tcp \u2713 Also limited in squid ACLs. Should be limited to access from your worker nodes 3401 udp \u2713 Also limited in squid ACLs. 
Should be limited to public monitoring server addresses The addresses of the WLCG monitoring servers for use in firewalls are listed in the upstream documentation Enabling monitoring section .","title":"Networking"},{"location":"data/frontier-squid/#frontier-squid-log-files","text":"Log file contents are explained in the upstream documentation Log file contents section .","title":"Frontier Squid Log Files"},{"location":"data/run-frontier-squid-container/","text":"Running Frontier Squid in a Container \u00b6 Frontier Squid is a distribution of the well-known squid HTTP caching proxy software that is optimized for use with applications on the Worldwide LHC Computing Grid (WLCG). It has many advantages over regular squid for common distributed computing applications, especially Frontier and CVMFS. The OSG distribution of frontier-squid is a straight rebuild of the upstream frontier-squid package for the convenience of OSG users. Tip OSG recommends that all sites run a caching proxy for HTTP to help reduce bandwidth and improve throughput. This document outlines how to run Frontier Squid in a Docker container. Before Starting \u00b6 Before starting the installation process, consider the following points (consulting the Frontier Squid Reference section as needed): Docker: For the purpose of this guide, the host must have a running docker service and you must have the ability to start containers (i.e., belong to the docker Unix group). Network ports: Frontier squid communicates on ports 3128 (TCP) and 3401 (UDP). We encourage sites to allow monitoring on port 3401 via UDP from CERN IP address ranges, 128.142.0.0/16, 188.184.128.0/17, 188.185.48.0/20 and 188.185.128.0/17. See the CERN monitoring documentation for additional details. If outgoing connections are filtered, note that CVMFS always uses ports 8000, 80, or 8080. Host choice: If you will be supporting the Frontier application at your site, review the upstream documentation to determine how to size your equipment. Configuring Squid \u00b6 Environment variables (optional) \u00b6 In addition to the required configuration above (ports and file systems), you may also configure the behavior of your cache with the following environment variables: Variable name Description Defaults SQUID_IPRANGE Limits the incoming connections to the provided whitelist. By default only standard private network addresses are whitelisted. SQUID_CACHE_DISK Sets the cache_dir option which determines the disk size squid uses. Must be an integer value, and its unit is MBs. Note: The cache disk area is located at /var/cache/squid. Defaults to 10000. SQUID_CACHE_MEM Sets the cache_mem option which regulates the size squid reserves for caching small objects in memory. Includes a space and unit, e.g. \"MB\". Defaults to \"128 MB\". Cache Disk Size For production deployments, OSG recommends allocating at least 50 to 100 GB (50000 to 100000 MB) to SQUID_CACHE_DISK. Mount points \u00b6 In order to preserve the cache between redeployments, you should map the following areas to persistent storage outside the container: Mountpoint Description Example docker mount /var/cache/squid This directory contains the cache for squid. See also SQUID_CACHE_DISK above. -v /tmp/squid:/var/cache/squid /var/log/squid This directory contains the squid logs. -v /tmp/log:/var/log/squid For more details, see the Frontier Squid documentation . 
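For instance, a minimal environment file (illustrative values only; the variable names are the ones described above, and /opt/xcache/.env is the path used in the systemd example) might contain: SQUID_IPRANGE=192.168.0.0/16 SQUID_CACHE_DISK=60000 SQUID_CACHE_MEM=256 MB It can then be passed to the container with docker run --env-file /opt/xcache/.env as shown in the run examples.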
Configuration customization (optional) \u00b6 More complicated configuration customization can be done by mounting .sh and .awk files into /etc/squid/customize.d. For details on the names and content of those files see the comments in the customization script and see the upstream documentation on configuration customization. Running a Frontier Squid Container \u00b6 To run a Frontier Squid container with the defaults: user@host $ docker run --rm --name frontier-squid \\ -v :/var/cache/squid \\ -v :/var/log/squid \\ -p :3128 opensciencegrid/frontier-squid:23-release You may pass configuration variables in KEY=VALUE format with either docker -e options or in a file specified with --env-file= . Running a Frontier Squid container with systemd \u00b6 An example systemd service file for Frontier Squid. This will require creating the environment file in the directory /opt/xcache/.env . Note This example systemd file assumes is 3128 and is /tmp/squid and is /tmp/log . Create the systemd service file /etc/systemd/system/docker.frontier-squid.service as follows: [Unit] Description=Stash Cache Container After=docker.service Requires=docker.service [Service] TimeoutStartSec=0 Restart=always ExecStartPre=-/usr/bin/docker stop %n ExecStartPre=-/usr/bin/docker rm %n ExecStartPre=/usr/bin/docker pull opensciencegrid/frontier-squid:23-release ExecStart=/usr/bin/docker run --rm --name %n --publish 3128:3128 -v /tmp/squid:/var/cache/squid -v /tmp/log:/var/log/squid --env-file /opt/xcache/.env opensciencegrid/frontier-squid:23-release [Install] WantedBy=multi-user.target Enable and start the service with: root@host $ systemctl enable docker.frontier-squid root@host $ systemctl start docker.frontier-squid Validating the Frontier Squid Cache \u00b6 The cache server functions as a normal HTTP server and can interact with typical HTTP clients, such as curl or wget . Here, is the port chosen in the docker run command, 3128 by default. user@host $ export http_proxy = http://localhost: user@host $ wget -qdO/dev/null http://frontier.cern.ch 2 > & 1 | grep X-Cache X-Cache: MISS from 797a56e426cf user@host $ wget -qdO/dev/null http://frontier.cern.ch 2 > & 1 | grep X-Cache X-Cache: HIT from 797a56e426cf Registering Frontier Squid \u00b6 See the Registering Frontier Squid instructions to register your Frontier Squid host. Getting Help \u00b6 To get assistance, please use the this page .","title":"Running Frontier Squid in a Container"},{"location":"data/run-frontier-squid-container/#running-frontier-squid-in-a-container","text":"Frontier Squid is a distribution of the well-known squid HTTP caching proxy software that is optimized for use with applications on the Worldwide LHC Computing Grid (WLCG). It has many advantages over regular squid for common distributed computing applications, especially Frontier and CVMFS. The OSG distribution of frontier-squid is a straight rebuild of the upstream frontier-squid package for the convenience of OSG users. Tip OSG recommends that all sites run a caching proxy for HTTP to help reduce bandwidth and improve throughput. 
This document outlines how to run Frontier Squid in a Docker container.","title":"Running Frontier Squid in a Container"},{"location":"data/run-frontier-squid-container/#before-starting","text":"Before starting the installation process, consider the following points (consulting the Frontier Squid Reference section as needed): Docker: For the purpose of this guide, the host must have a running docker service and you must have the ability to start containers (i.e., belong to the docker Unix group). Network ports: Frontier squid communicates on ports 3128 (TCP) and 3401 (UDP). We encourage sites to allow monitoring on port 3401 via UDP from CERN IP address ranges, 128.142.0.0/16, 188.184.128.0/17, 188.185.48.0/20 and 188.185.128.0/17. See the CERN monitoring documentation for additional details. If outgoing connections are filtered, note that CVMFS always uses ports 8000, 80, or 8080. Host choice: If you will be supporting the Frontier application at your site, review the upstream documentation to determine how to size your equipment.","title":"Before Starting"},{"location":"data/run-frontier-squid-container/#configuring-squid","text":"","title":"Configuring Squid"},{"location":"data/run-frontier-squid-container/#environment-variables-optional","text":"In addition to the required configuration above (ports and file systems), you may also configure the behavior of your cache with the following environment variables: Variable name Description Defaults SQUID_IPRANGE Limits the incoming connections to the provided whitelist. By default only standard private network addresses are whitelisted. SQUID_CACHE_DISK Sets the cache_dir option which determines the disk size squid uses. Must be an integer value, and its unit is MBs. Note: The cache disk area is located at /var/cache/squid. Defaults to 10000. SQUID_CACHE_MEM Sets the cache_mem option which regulates the size squid reserves for caching small objects in memory. Includes a space and unit, e.g. \"MB\". Defaults to \"128 MB\". Cache Disk Size For production deployments, OSG recommends allocating at least 50 to 100 GB (50000 to 100000 MB) to SQUID_CACHE_DISK.","title":"Environment variables (optional)"},{"location":"data/run-frontier-squid-container/#mount-points","text":"In order to preserve the cache between redeployments, you should map the following areas to persistent storage outside the container: Mountpoint Description Example docker mount /var/cache/squid This directory contains the cache for squid. See also SQUID_CACHE_DISK above. -v /tmp/squid:/var/cache/squid /var/log/squid This directory contains the squid logs. -v /tmp/log:/var/log/squid For more details, see the Frontier Squid documentation .","title":"Mount points"},{"location":"data/run-frontier-squid-container/#configuration-customization-optional","text":"More complicated configuration customization can be done by mounting .sh and .awk files into /etc/squid/customize.d. 
For details on the names and content of those files see the comments in the customization script and see the upstream documentation on configuration customization.","title":"Configuration customization (optional)"},{"location":"data/run-frontier-squid-container/#running-a-frontier-squid-container","text":"To run a Frontier Squid container with the defaults: user@host $ docker run --rm --name frontier-squid \\ -v :/var/cache/squid \\ -v :/var/log/squid \\ -p :3128 opensciencegrid/frontier-squid:23-release You may pass configuration variables in KEY=VALUE format with either docker -e options or in a file specified with --env-file= .","title":"Running a Frontier Squid Container"},{"location":"data/run-frontier-squid-container/#running-a-frontier-squid-container-with-systemd","text":"An example systemd service file for Frontier Squid. This will require creating the environment file in the directory /opt/xcache/.env . Note This example systemd file assumes is 3128 and is /tmp/squid and is /tmp/log . Create the systemd service file /etc/systemd/system/docker.frontier-squid.service as follows: [Unit] Description=Stash Cache Container After=docker.service Requires=docker.service [Service] TimeoutStartSec=0 Restart=always ExecStartPre=-/usr/bin/docker stop %n ExecStartPre=-/usr/bin/docker rm %n ExecStartPre=/usr/bin/docker pull opensciencegrid/frontier-squid:23-release ExecStart=/usr/bin/docker run --rm --name %n --publish 3128:3128 -v /tmp/squid:/var/cache/squid -v /tmp/log:/var/log/squid --env-file /opt/xcache/.env opensciencegrid/frontier-squid:23-release [Install] WantedBy=multi-user.target Enable and start the service with: root@host $ systemctl enable docker.frontier-squid root@host $ systemctl start docker.frontier-squid","title":"Running a Frontier Squid container with systemd"},{"location":"data/run-frontier-squid-container/#validating-the-frontier-squid-cache","text":"The cache server functions as a normal HTTP server and can interact with typical HTTP clients, such as curl or wget . Here, is the port chosen in the docker run command, 3128 by default. user@host $ export http_proxy = http://localhost: user@host $ wget -qdO/dev/null http://frontier.cern.ch 2 > & 1 | grep X-Cache X-Cache: MISS from 797a56e426cf user@host $ wget -qdO/dev/null http://frontier.cern.ch 2 > & 1 | grep X-Cache X-Cache: HIT from 797a56e426cf","title":"Validating the Frontier Squid Cache"},{"location":"data/run-frontier-squid-container/#registering-frontier-squid","text":"See the Registering Frontier Squid instructions to register your Frontier Squid host.","title":"Registering Frontier Squid"},{"location":"data/run-frontier-squid-container/#getting-help","text":"To get assistance, please use the this page .","title":"Getting Help"},{"location":"data/update-oasis/","text":"Updating Software in OASIS \u00b6 OASIS is the OSG Application Software Installation Service that can be used to publish and update software on OSG Worker Nodes under /cvmfs/oasis.opensciencegrid.org . It is implemented using CernVM FileSystem (CVMFS) technology and is the recommended method to make software available to researchers in the OSG Consortium. This document is a step by step explanation of how a member of a Virtual Organization (VO) can become an OASIS manager for their VO and gain access to the shared OASIS service for software management. The shared OASIS service is especially appropropriate for VOs that have a relatively small number of members and a relatively small amount of software to distribute. 
Larger VOs should consider hosting their own separate repositories . Note For information on how to configure an OASIS client see the CVMFS installation documentation . Requirements \u00b6 To begin the process to distribute software on OASIS using the service, you must: Register as an OSG contact and upload your SSH Key . Submit a request to help@osg-htc.org to become an OASIS manager with the following: The names of the VO(s) whose software that you would like to manage with the shared OASIS login host The names of any other VO members that should be OASIS managers The name of a member of the VO(s) that can verify your affiliation, and Cc that person on your emailed request How to use OASIS \u00b6 Log in with SSH \u00b6 The shared OASIS login server is accessible via SSH for all OASIS managers with registered SSH keys: user@host $ ssh -i ouser.@oasis-login.opensciencegrid.org Change for the name of the Virtual Organization you are trying to access and with the path to the private part of the SSH key whose public part you registered with the OSG . Instead of putting -i or ouser.@ on the command line, you can put it in your ~/.ssh/config : Host oasis-login.opensciencegrid.org User ouser. IdentityFile Install and update software \u00b6 Once you log in, you can add/modify/remove content on a staging area at /stage/oasis/$VO where $VO is the name of the VO represented by the manager. Files here are visible to both oasis-login and the Stratum 0 server (oasis.opensciencegrid.org). There is a symbolic link at /cvmfs/oasis.opensciencegrid.org/$VO that points to the same staging area. Request an oasis publish with this command: user@host $ osg-oasis-update This command queues a process to sync the content of OASIS with the content of /stage/oasis/$VO osg-oasis-update returns immediately, but only one update can run at a time (across all VOs); your request may be queued behind a different VO. If you encounter severe delays before the update is finished being published (more than 4 hours), please file a support ticket . Limitations on repository content \u00b6 Although CVMFS provides a POSIX filesystem, it does not work well with all types of content. Content in OASIS is expected to adhere to the CVMFS repository content limitations so please review those guidelines carefully. Testing \u00b6 After osg-oasis-update completes and the changes have been propagated to the CVMFS stratum 1 servers (typically between 0 and 60 minutes, but possibly longer if the servers are busy with updates of other repositories) then the changes can be visible under /cvmfs/oasis.opensciencegrid.org on a computer that has the CVMFS client installed . A client normally only checks for updates if at least an hour has passed since it last checked, but people who have superuser access on the client machine can force it to check again with root@host # cvmfs_talk -i oasis.opensciencegrid.org remount This can be done while the filesystem is mounted (despite the name, it does not do an OS-level umount/mount of the filesystem). If the filesystem is not mounted, it will automatically check for new updates the next time it is mounted. 
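As an end-to-end sketch of the publishing steps above, using a hypothetical VO name "myvo", a hypothetical SSH key path, and a hypothetical software directory:
    user@host $ ssh -i ~/.ssh/oasis_key ouser.myvo@oasis-login.opensciencegrid.org
    ouser.myvo@oasis-login $ cp -r my-software-1.0 /stage/oasis/myvo/
    ouser.myvo@oasis-login $ osg-oasis-update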
To find out whether an update has reached the CVMFS stratum 1 server, you can check the latest osg-oasis-update time seen by the stratum 1 most favored by your CVMFS client with the following long command on your client machine: user@host $ date -d \"1970-1-1 GMT + $( wget -qO- $( attr -qg host /cvmfs/oasis.opensciencegrid.org ) /.cvmfspublished | \\ cat -v | sed -n '/^T/{s/^T//p;q;}' ) sec\" References \u00b6 CVMFS Documentation","title":"Update OASIS Shared Repo"},{"location":"data/update-oasis/#updating-software-in-oasis","text":"OASIS is the OSG Application Software Installation Service that can be used to publish and update software on OSG Worker Nodes under /cvmfs/oasis.opensciencegrid.org . It is implemented using CernVM FileSystem (CVMFS) technology and is the recommended method to make software available to researchers in the OSG Consortium. This document is a step by step explanation of how a member of a Virtual Organization (VO) can become an OASIS manager for their VO and gain access to the shared OASIS service for software management. The shared OASIS service is especially appropriate for VOs that have a relatively small number of members and a relatively small amount of software to distribute. Larger VOs should consider hosting their own separate repositories . Note For information on how to configure an OASIS client see the CVMFS installation documentation .","title":"Updating Software in OASIS"},{"location":"data/update-oasis/#requirements","text":"To begin the process to distribute software on OASIS using the service, you must: Register as an OSG contact and upload your SSH Key . Submit a request to help@osg-htc.org to become an OASIS manager with the following: The names of the VO(s) whose software you would like to manage with the shared OASIS login host The names of any other VO members that should be OASIS managers The name of a member of the VO(s) that can verify your affiliation, and Cc that person on your emailed request","title":"Requirements"},{"location":"data/update-oasis/#how-to-use-oasis","text":"","title":"How to use OASIS"},{"location":"data/update-oasis/#log-in-with-ssh","text":"The shared OASIS login server is accessible via SSH for all OASIS managers with registered SSH keys: user@host $ ssh -i ouser.@oasis-login.opensciencegrid.org Change for the name of the Virtual Organization you are trying to access and with the path to the private part of the SSH key whose public part you registered with the OSG . Instead of putting -i or ouser.@ on the command line, you can put it in your ~/.ssh/config : Host oasis-login.opensciencegrid.org User ouser. IdentityFile ","title":"Log in with SSH"},{"location":"data/update-oasis/#install-and-update-software","text":"Once you log in, you can add/modify/remove content on a staging area at /stage/oasis/$VO where $VO is the name of the VO represented by the manager. Files here are visible to both oasis-login and the Stratum 0 server (oasis.opensciencegrid.org). There is a symbolic link at /cvmfs/oasis.opensciencegrid.org/$VO that points to the same staging area. Request an oasis publish with this command: user@host $ osg-oasis-update This command queues a process to sync the content of OASIS with the content of /stage/oasis/$VO osg-oasis-update returns immediately, but only one update can run at a time (across all VOs); your request may be queued behind a different VO. 
If you encounter severe delays before the update is finished being published (more than 4 hours), please file a support ticket .","title":"Install and update software"},{"location":"data/update-oasis/#limitations-on-repository-content","text":"Although CVMFS provides a POSIX filesystem, it does not work well with all types of content. Content in OASIS is expected to adhere to the CVMFS repository content limitations so please review those guidelines carefully.","title":"Limitations on repository content"},{"location":"data/update-oasis/#testing","text":"After osg-oasis-update completes and the changes have been propagated to the CVMFS stratum 1 servers (typically between 0 and 60 minutes, but possibly longer if the servers are busy with updates of other repositories) then the changes can be visible under /cvmfs/oasis.opensciencegrid.org on a computer that has the CVMFS client installed . A client normally only checks for updates if at least an hour has passed since it last checked, but people who have superuser access on the client machine can force it to check again with root@host # cvmfs_talk -i oasis.opensciencegrid.org remount This can be done while the filesystem is mounted (despite the name, it does not do an OS-level umount/mount of the filesystem). If the filesystem is not mounted, it will automatically check for new updates the next time it is mounted. In order to find out if an update has reached the CVMFS stratum 1 server, you can find out the latest osg-oasis-update time seen by the stratum 1 most favored by your CVMFS client with the following long command on your client machine: user@host $ date -d \"1970-1-1 GMT + $( wget -qO- $( attr -qg host /cvmfs/oasis.opensciencegrid.org ) /.cvmfspublished | \\ cat -v | sed -n '/^T/{s/^T//p;q;}' ) sec\"","title":"Testing"},{"location":"data/update-oasis/#references","text":"CVMFS Documentation","title":"References"},{"location":"data/stashcache/install-cache/","text":"Installing the OSDF Cache \u00b6 This document describes how to install an Open Science Data Federation (OSDF) cache service. This service allows a site or regional network to cache data frequently used on the OSG, reducing data transfer over the wide-area network and decreasing access latency. Minimum version for this documentation This document describes features introduced in XCache 3.3.0, released on 2022-12-08. When installing, ensure that your version of the stash-cache RPM is at least 3.3.0. Note The OSDF cache was previously named \"Stash Cache\" and some documentation and software may use the old name. Before Starting \u00b6 Before starting the installation process, consider the following requirements: Operating system: Ensure the host has a supported operating system User IDs: If they do not exist already, the installation will create the Linux user IDs condor and xrootd Host certificate: Required for authentication. See our host certificate documentation for instructions on how to request and install host certificates. Network ports: Your host may run a public cache instance (for serving public data only), an authenticated cache instance (for serving protected data), or both. 
A public cache instance requires the following ports open: Inbound TCP port 1094 for file access via the XRootD protocol Inbound TCP port 8000 for file access via HTTP(S) Outbound UDP port 9930 for reporting to xrd-report.osgstorage.org and xrd-mon.osgstorage.org for monitoring An authenticated cache instance requires the following ports open: Inbound TCP port 8443 for authenticated file access via HTTPS Outbound UDP port 9930 for reporting to xrd-report.osgstorage.org and xrd-mon.osgstorage.org for monitoring Hardware requirements: We recommend that a cache has at least 10Gbps connectivity, 1TB of disk space for the cache directory, and 12GB of RAM. As with all OSG software installations, there are some one-time steps to prepare in advance: Obtain root access to the host Prepare the required Yum repositories Install CA certificates Registering the Cache \u00b6 To be part of the OSDF, your cache must be registered with the OSG. You will need basic information like the resource name, hostname, host certificate DN, and the administrative and security contacts. Initial registration \u00b6 To register your cache host, follow the general registration instructions here . The service type is XRootD cache server . Info This step must be completed before installation. In your registration, you must specify which VOs your cache will serve by adding an AllowedVOs list, with each line specifying a VO whose data you are willing to cache. There are special values you may use in AllowedVOs : ANY_PUBLIC indicates that the cache is willing to serve public data from any VO. ANY indicates that the cache is willing to serve data from any VO, both public and protected. ANY implies ANY_PUBLIC . There are extra requirements for serving protected data: In addition to the cache allowing a VO in the AllowedVOs list, that VO must also allow the cache in its AllowedCaches list. See the page on getting your VO's data into OSDF . There must be an authenticated XRootD instance on the cache server. There must be a DN attribute in the resource registration with the subject DN of the host certificate This is an example registration for a cache server that serves all public data: MY_OSDF_CACHE : FQDN : my-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - ANY_PUBLIC This is an example registration for a cache server that only serves protected data for the Open Science Pool: MY_AUTH_OSDF_CACHE : FQDN : my-auth-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - OSG DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=my-auth-cache.example.net This is an example registration for a cache server that serves all public data and protected data from the OSG VO: MY_COMBO_OSDF_CACHE : FQDN : my-combo-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - OSG - ANY_PUBLIC DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=my-combo-cache.example.net Non-standard ports \u00b6 By default, an unauthenticated cache instance serves public data on port 8000, and an authenticated cache instance serves protected data on port 8443. 
If you change the ports for your cache instances, you must specify the new endpoints under the service, as follows: MY_COMBO_OSDF_CACHE2 : FQDN : my-combo-cache2.example.net Services : XRootD cache server : Description : OSDF cache server Details : endpoint_override : my-combo-cache2.example.net:8080 auth_endpoint_override : my-combo-cache2.example.net:8444 Finalizing registration \u00b6 Once initial registration is complete, you may start the installation process. In the meantime, open a help ticket with your cache name. Mention in your ticket that you would like to \"Finalize the cache registration.\" Installing the Cache \u00b6 The OSDF software consists of an XRootD server with special configuration and supporting services. To simplify installation, OSG provides convenience RPMs that install all required packages with a single command: root@host # yum install stash-cache Configuring the Cache \u00b6 First, you must create a \"cache directory\", which will be used to store downloaded files. By default this is /mnt/stash . We recommend using a separate file system for the cache directory, with at least 1 TB of storage available. Note The cache directory must be writable by the xrootd:xrootd user and group. The stash-cache package provides default configuration files in /etc/xrootd/xrootd-stash-cache.cfg and /etc/xrootd/config.d/ . Administrators may provide additional configuration by placing files in /etc/xrootd/config.d/1*.cfg (for files that need to be processed BEFORE the OSG configuration) or /etc/xrootd/config.d/9*.cfg (for files that need to be processed AFTER the OSG configuration). You must configure every variable in /etc/xrootd/config.d/10-common-site-local.cfg . The mandatory variables to configure are: set rootdir = /mnt/stash : the mounted filesystem path to export. This document refers to this as /mnt/stash . set resourcename = YOUR_RESOURCE_NAME : the resource name registered with the OSG. Ensure the xrootd service has a certificate \u00b6 The service will need a certificate for reporting and to authenticate to origins. The easiest solution for this is to use your host certificate and key as follows: Copy the host certificate to /etc/grid-security/xrd/xrd{cert,key}.pem Set the owner of the directory and contents /etc/grid-security/xrd/ to xrootd:xrootd : root@host # chown -R xrootd:xrootd /etc/grid-security/xrd/ Note You must repeat the above steps whenever you renew your host certificate. If you automate certificate renewal, you should automate copying as well. In addition, you will need to restart the XRootD services ( xrootd@stash-cache and/or xrootd@stash-cache-auth ) so they load the updated certificates. For example, if you are using Certbot for Let's Encrypt, you should write a \"deploy hook\" as documented on the Certbot site . Configuring Optional Features \u00b6 Adjust disk utilization \u00b6 To adjust the disk utilization of your cache, create or edit a file named /etc/xrootd/config.d/90-local.cfg and set the values of pfc.diskusage . pfc.diskusage 0.90 0.95 The two values correspond to the low and high usage water marks, respectively. When usage goes above the high water mark, the XRootD service will delete cached files until usage goes below the low water mark. Enable remote debugging \u00b6 XRootD provides remote debugging via a read-only file system named digFS. This feature is disabled by default, but you may enable it if you need help troubleshooting your server. 
Warning Remote debugging should only be enabled for as long as it is needed to troubleshoot your server. To enable remote debugging, edit /etc/xrootd/digauth.cfg and specify the authorizations for reading digFS. An example of authorizations: all allow gsi g=/glow h=*.cs.wisc.edu This gives access to the config file, log files, core files, and process information to anyone from *.cs.wisc.edu in the /glow VOMS group. See the XRootD manual for the full syntax. Remote debugging should only be enabled for as long as you need assistance. As soon as your issue has been resolved, revert any changes you have made to /etc/xrootd/digauth.cfg . Enable HTTPS on the unauthenticated cache \u00b6 By default, the unauthenticated cache instance uses plain HTTP, not HTTPS. To use HTTPS: Add a certificate according to the instructions above Uncomment set EnableVoms = 1 in /etc/xrootd/config.d/10-osg-xrdvoms.cfg Upgrading from OSG 3.5 If upgrading from OSG 3.5, you may have a file with the following contents in /etc/xrootd/config.d : # Support HTTPS access to unauthenticated cache if named stash-cache http.cadir /etc/grid-security/certificates http.cert /etc/grid-security/xrd/xrdcert.pem http.key /etc/grid-security/xrd/xrdkey.pem http.secxtractor /usr/lib64/libXrdLcmaps.so fi You must delete this config block or XRootD will fail to start. Manually Setting the FQDN (optional) \u00b6 The FQDN of the cache server that you registered in Topology may be different than its internal hostname (as reported by hostname -f ). For example, this may be the case if your cache is behind a load balancer such as LVS. In this case, you must manually tell the cache services which FQDN to use for topology lookups. Create the file /etc/systemd/system/stash-authfile@.service.d/override.conf (note the @ in the directory name) with the following contents: [Service] Environment = CACHE_FQDN= Run systemctl daemon-reload after modifying the file. Adding to Authorization Files (Optional) \u00b6 The stash-authfile services on the cache generate files that configure authorization for XRootD. Put local additions to this configuration into separate files, according to this table: Purpose Generated file Local additions file VOMS/SSL/X.509 auth config for unauthenticated cache instance /run/stash-cache/Authfile /etc/xrootd/stash-cache-Authfile.local VOMS/SSL/X.509 auth config for authenticated cache instance /run/stash-cache-auth/Authfile /etc/xrootd/stash-cache-auth-Authfile.local SciTokens config for authenticated cache instance /run/stash-cache-auth/scitokens.conf /etc/xrootd/stash-cache-auth-scitokens.conf.local Note Use of these local additions files requires XCache 3.5.0 or newer. Managing OSDF services \u00b6 These services must be managed by systemctl and may start additional services as dependencies. As a reminder, here are common service commands (all run as root ): To... Run the command... Start a service systemctl start Stop a service systemctl stop Enable a service to start on boot systemctl enable Disable a service from starting on boot systemctl disable Public cache services \u00b6 Software Service name Notes XRootD xrootd@stash-cache.service The XRootD daemon, which performs the data transfers XCache xcache-reporter.timer Reports usage information to collector.opensciencegrid.org Fetch CRL EL8: fetch-crl.timer EL7: fetch-crl-boot and fetch-crl-cron Required to authenticate monitoring services. 
See CA documentation for more info stash-authfile@stash-cache.service Generate authentication configuration files for XRootD (public cache instance) stash-authfile@stash-cache.timer Periodically run the above service (public cache instance) Authenticated cache services \u00b6 Software Service name Notes XRootD xrootd-renew-proxy.service Renew a proxy for authenticated downloads to the cache xrootd@stash-cache-auth.service The xrootd daemon which performs authenticated data transfers xrootd-renew-proxy.timer Trigger daily proxy renewal stash-authfile@stash-cache-auth.service Generate the authentication configuration files for XRootD (authenticated cache instance) stash-authfile@stash-cache-auth.timer Periodically run the above service (authenticated cache instance) Validating the Cache \u00b6 The cache server functions as a normal HTTP server and can interact with typical HTTP clients, such as curl . user@host $ curl -O http://cache_host:8000/ospool/uc-shared/public/OSG-Staff/validation/test.txt curl may not correctly report a failure, so verify that the contents of the file are: hello world! Test cache server reporting to the central collector \u00b6 To verify the cache is reporting to the central collector, run the following command from the cache server: user@host $ condor_status -any -pool collector.opensciencegrid.org:9619 \\ -l -const \"Name==\\\"xrootd@`hostname`\\\"\" The output of the above command should detail what the collector knows about the status of your cache. Here is an example snippet of the output: AuthenticatedIdentity = \"sc-cache.chtc.wisc.edu@daemon.opensciencegrid.org\" AuthenticationMethod = \"GSI\" free_cache_bytes = 868104454144 free_cache_fraction = 0.8022261674321525 LastHeardFrom = 1552002482 most_recent_access_time = 1551997049 MyType = \"Machine\" Name = \"xrootd@sc-cache.chtc.wisc.edu\" ping_elapsed_time = 0.00763392448425293 ping_response_code = 0 ping_response_message = \"[SUCCESS] \" ping_response_status = \"ok\" STASHCACHE_DaemonVersion = \"1.0.0\" ... Updating to OSG 3.6 \u00b6 The OSG 3.5 series reached end-of-life on May 1, 2022. Admins are strongly encouraged to move their caches to OSG 3.6. See general update instructions . Unauthenticated caches ( xrootd@stash-cache service) do not need any configuration changes, unless HTTPS access has been enabled. See the \"enable HTTPS on the unauthenticated cache\" section ) for the necessary configuration changes. Authenticated caches ( xrootd@stash-cache-auth service) may need the configuration changes described in the updating to OSG 3.6 section of the XRootD authorization configuration document. Getting Help \u00b6 To get assistance, please use the this page .","title":"Install from RPM"},{"location":"data/stashcache/install-cache/#installing-the-osdf-cache","text":"This document describes how to install an Open Science Data Federation (OSDF) cache service. This service allows a site or regional network to cache data frequently used on the OSG, reducing data transfer over the wide-area network and decreasing access latency. Minimum version for this documentation This document describes features introduced in XCache 3.3.0, released on 2022-12-08. When installing, ensure that your version of the stash-cache RPM is at least 3.3.0. 
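For example, you can confirm that the installed RPM meets this minimum version before proceeding:
    root@host # rpm -q stash-cache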
Note The OSDF cache was previously named \"Stash Cache\" and some documentation and software may use the old name.","title":"Installing the OSDF Cache"},{"location":"data/stashcache/install-cache/#before-starting","text":"Before starting the installation process, consider the following requirements: Operating system: Ensure the host has a supported operating system User IDs: If they do not exist already, the installation will create the Linux user IDs condor and xrootd Host certificate: Required for authentication. See our host certificate documentation for instructions on how to request and install host certificates. Network ports: Your host may run a public cache instance (for serving public data only), an authenticated cache instance (for serving protected data), or both. A public cache instance requires the following ports open: Inbound TCP port 1094 for file access via the XRootD protocol Inbound TCP port 8000 for file access via HTTP(S) Outbound UDP port 9930 for reporting to xrd-report.osgstorage.org and xrd-mon.osgstorage.org for monitoring An authenticated cache instance requires the following ports open: Inbound TCP port 8443 for authenticated file access via HTTPS Outbound UDP port 9930 for reporting to xrd-report.osgstorage.org and xrd-mon.osgstorage.org for monitoring Hardware requirements: We recommend that a cache has at least 10Gbps connectivity, 1TB of disk space for the cache directory, and 12GB of RAM. As with all OSG software installations, there are some one-time steps to prepare in advance: Obtain root access to the host Prepare the required Yum repositories Install CA certificates","title":"Before Starting"},{"location":"data/stashcache/install-cache/#registering-the-cache","text":"To be part of the OSDF, your cache must be registered with the OSG. You will need basic information like the resource name, hostname, host certificate DN, and the administrative and security contacts.","title":"Registering the Cache"},{"location":"data/stashcache/install-cache/#initial-registration","text":"To register your cache host, follow the general registration instructions here . The service type is XRootD cache server . Info This step must be completed before installation. In your registration, you must specify which VOs your cache will serve by adding an AllowedVOs list, with each line specifying a VO whose data you are willing to cache. There are special values you may use in AllowedVOs : ANY_PUBLIC indicates that the cache is willing to serve public data from any VO. ANY indicates that the cache is willing to serve data from any VO, both public and protected. ANY implies ANY_PUBLIC . There are extra requirements for serving protected data: In addition to the cache allowing a VO in the AllowedVOs list, that VO must also allow the cache in its AllowedCaches list. See the page on getting your VO's data into OSDF . There must be an authenticated XRootD instance on the cache server. 
There must be a DN attribute in the resource registration with the subject DN of the host certificate This is an example registration for a cache server that serves all public data: MY_OSDF_CACHE : FQDN : my-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - ANY_PUBLIC This is an example registration for a cache server that only serves protected data for the Open Science Pool: MY_AUTH_OSDF_CACHE : FQDN : my-auth-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - OSG DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=my-auth-cache.example.net This is an example registration for a cache server that serves all public data and protected data from the OSG VO: MY_COMBO_OSDF_CACHE : FQDN : my-combo-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - OSG - ANY_PUBLIC DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=my-combo-cache.example.net","title":"Initial registration"},{"location":"data/stashcache/install-cache/#non-standard-ports","text":"By default, an unauthenticated cache instance serves public data on port 8000, and an authenticated cache instance serves protected data on port 8443. If you change the ports for your cache instances, you must specify the new endpoints under the service, as follows: MY_COMBO_OSDF_CACHE2 : FQDN : my-combo-cache2.example.net Services : XRootD cache server : Description : OSDF cache server Details : endpoint_override : my-combo-cache2.example.net:8080 auth_endpoint_override : my-combo-cache2.example.net:8444","title":"Non-standard ports"},{"location":"data/stashcache/install-cache/#finalizing-registration","text":"Once initial registration is complete, you may start the installation process. In the meantime, open a help ticket with your cache name. Mention in your ticket that you would like to \"Finalize the cache registration.\"","title":"Finalizing registration"},{"location":"data/stashcache/install-cache/#installing-the-cache","text":"The OSDF software consists of an XRootD server with special configuration and supporting services. To simplify installation, OSG provides convenience RPMs that install all required packages with a single command: root@host # yum install stash-cache","title":"Installing the Cache"},{"location":"data/stashcache/install-cache/#configuring-the-cache","text":"First, you must create a \"cache directory\", which will be used to store downloaded files. By default this is /mnt/stash . We recommend using a separate file system for the cache directory, with at least 1 TB of storage available. Note The cache directory must be writable by the xrootd:xrootd user and group. The stash-cache package provides default configuration files in /etc/xrootd/xrootd-stash-cache.cfg and /etc/xrootd/config.d/ . Administrators may provide additional configuration by placing files in /etc/xrootd/config.d/1*.cfg (for files that need to be processed BEFORE the OSG configuration) or /etc/xrootd/config.d/9*.cfg (for files that need to be processed AFTER the OSG configuration). You must configure every variable in /etc/xrootd/config.d/10-common-site-local.cfg . The mandatory variables to configure are: set rootdir = /mnt/stash : the mounted filesystem path to export. This document refers to this as /mnt/stash . 
set resourcename = YOUR_RESOURCE_NAME : the resource name registered with the OSG.","title":"Configuring the Cache"},{"location":"data/stashcache/install-cache/#ensure-the-xrootd-service-has-a-certificate","text":"The service will need a certificate for reporting and to authenticate to origins. The easiest solution for this is to use your host certificate and key as follows: Copy the host certificate to /etc/grid-security/xrd/xrd{cert,key}.pem Set the owner of the directory and contents /etc/grid-security/xrd/ to xrootd:xrootd : root@host # chown -R xrootd:xrootd /etc/grid-security/xrd/ Note You must repeat the above steps whenever you renew your host certificate. If you automate certificate renewal, you should automate copying as well. In addition, you will need to restart the XRootD services ( xrootd@stash-cache and/or xrootd@stash-cache-auth ) so they load the updated certificates. For example, if you are using Certbot for Let's Encrypt, you should write a \"deploy hook\" as documented on the Certbot site .","title":"Ensure the xrootd service has a certificate"},{"location":"data/stashcache/install-cache/#configuring-optional-features","text":"","title":"Configuring Optional Features"},{"location":"data/stashcache/install-cache/#adjust-disk-utilization","text":"To adjust the disk utilization of your cache, create or edit a file named /etc/xrootd/config.d/90-local.cfg and set the values of pfc.diskusage . pfc.diskusage 0.90 0.95 The two values correspond to the low and high usage water marks, respectively. When usage goes above the high water mark, the XRootD service will delete cached files until usage goes below the low water mark.","title":"Adjust disk utilization"},{"location":"data/stashcache/install-cache/#enable-remote-debugging","text":"XRootD provides remote debugging via a read-only file system named digFS. This feature is disabled by default, but you may enable it if you need help troubleshooting your server. Warning Remote debugging should only be enabled for long as it is needed to troubleshoot your server. To enable remote debugging, edit /etc/xrootd/digauth.cfg and specify the authorizations for reading digFS. An example of authorizations: all allow gsi g=/glow h=*.cs.wisc.edu This gives access to the config file, log files, core files, and process information to anyone from *.cs.wisc.edu in the /glow VOMS group. See the XRootD manual for the full syntax. Remote debugging should only be enabled for as long as you need assistance. As soon as your issue has been resolved, revert any changes you have made to /etc/xrootd/digauth.cfg .","title":"Enable remote debugging"},{"location":"data/stashcache/install-cache/#enable-https-on-the-unauthenticated-cache","text":"By default, the unauthenticated cache instance uses plain HTTP, not HTTPS. 
To use HTTPS: Add a certificate according to the instructions above Uncomment set EnableVoms = 1 in /etc/xrootd/config.d/10-osg-xrdvoms.cfg Upgrading from OSG 3.5 If upgrading from OSG 3.5, you may have a file with the following contents in /etc/xrootd/config.d : # Support HTTPS access to unauthenticated cache if named stash-cache http.cadir /etc/grid-security/certificates http.cert /etc/grid-security/xrd/xrdcert.pem http.key /etc/grid-security/xrd/xrdkey.pem http.secxtractor /usr/lib64/libXrdLcmaps.so fi You must delete this config block or XRootD will fail to start.","title":"Enable HTTPS on the unauthenticated cache"},{"location":"data/stashcache/install-cache/#manually-setting-the-fqdn-optional","text":"The FQDN of the cache server that you registered in Topology may be different than its internal hostname (as reported by hostname -f ). For example, this may be the case if your cache is behind a load balancer such as LVS. In this case, you must manually tell the cache services which FQDN to use for topology lookups. Create the file /etc/systemd/system/stash-authfile@.service.d/override.conf (note the @ in the directory name) with the following contents: [Service] Environment = CACHE_FQDN= Run systemctl daemon-reload after modifying the file.","title":"Manually Setting the FQDN (optional)"},{"location":"data/stashcache/install-cache/#adding-to-authorization-files-optional","text":"The stash-authfile services on the cache generate files that configure authorization for XRootD. Put local additions to this configuration into separate files, according to this table: Purpose Generated file Local additions file VOMS/SSL/X.509 auth config for unauthenticated cache instance /run/stash-cache/Authfile /etc/xrootd/stash-cache-Authfile.local VOMS/SSL/X.509 auth config for authenticated cache instance /run/stash-cache-auth/Authfile /etc/xrootd/stash-cache-auth-Authfile.local SciTokens config for authenticated cache instance /run/stash-cache-auth/scitokens.conf /etc/xrootd/stash-cache-auth-scitokens.conf.local Note Use of these local additions files require XCache 3.5.0 and newer.","title":"Adding to Authorization Files (Optional)"},{"location":"data/stashcache/install-cache/#managing-osdf-services","text":"These services must be managed by systemctl and may start additional services as dependencies. As a reminder, here are common service commands (all run as root ): To... Run the command... Start a service systemctl start Stop a service systemctl stop Enable a service to start on boot systemctl enable Disable a service from starting on boot systemctl disable ","title":"Managing OSDF services"},{"location":"data/stashcache/install-cache/#public-cache-services","text":"Software Service name Notes XRootD xrootd@stash-cache.service The XRootD daemon, which performs the data transfers XCache xcache-reporter.timer Reports usage information to collector.opensciencegrid.org Fetch CRL EL8: fetch-crl.timer EL7: fetch-crl-boot and fetch-crl-cron Required to authenticate monitoring services. 
See CA documentation for more info stash-authfile@stash-cache.service Generate authentication configuration files for XRootD (public cache instance) stash-authfile@stash-cache.timer Periodically run the above service (public cache instance)","title":"Public cache services"},{"location":"data/stashcache/install-cache/#authenticated-cache-services","text":"Software Service name Notes XRootD xrootd-renew-proxy.service Renew a proxy for authenticated downloads to the cache xrootd@stash-cache-auth.service The xrootd daemon which performs authenticated data transfers xrootd-renew-proxy.timer Trigger daily proxy renewal stash-authfile@stash-cache-auth.service Generate the authentication configuration files for XRootD (authenticated cache instance) stash-authfile@stash-cache-auth.timer Periodically run the above service (authenticated cache instance)","title":"Authenticated cache services"},{"location":"data/stashcache/install-cache/#validating-the-cache","text":"The cache server functions as a normal HTTP server and can interact with typical HTTP clients, such as curl . user@host $ curl -O http://cache_host:8000/ospool/uc-shared/public/OSG-Staff/validation/test.txt curl may not correctly report a failure, so verify that the contents of the file are: hello world!","title":"Validating the Cache"},{"location":"data/stashcache/install-cache/#test-cache-server-reporting-to-the-central-collector","text":"To verify the cache is reporting to the central collector, run the following command from the cache server: user@host $ condor_status -any -pool collector.opensciencegrid.org:9619 \\ -l -const \"Name==\\\"xrootd@`hostname`\\\"\" The output of the above command should detail what the collector knows about the status of your cache. Here is an example snippet of the output: AuthenticatedIdentity = \"sc-cache.chtc.wisc.edu@daemon.opensciencegrid.org\" AuthenticationMethod = \"GSI\" free_cache_bytes = 868104454144 free_cache_fraction = 0.8022261674321525 LastHeardFrom = 1552002482 most_recent_access_time = 1551997049 MyType = \"Machine\" Name = \"xrootd@sc-cache.chtc.wisc.edu\" ping_elapsed_time = 0.00763392448425293 ping_response_code = 0 ping_response_message = \"[SUCCESS] \" ping_response_status = \"ok\" STASHCACHE_DaemonVersion = \"1.0.0\" ...","title":"Test cache server reporting to the central collector"},{"location":"data/stashcache/install-cache/#updating-to-osg-36","text":"The OSG 3.5 series reached end-of-life on May 1, 2022. Admins are strongly encouraged to move their caches to OSG 3.6. See general update instructions . Unauthenticated caches ( xrootd@stash-cache service) do not need any configuration changes, unless HTTPS access has been enabled. See the \"enable HTTPS on the unauthenticated cache\" section ) for the necessary configuration changes. Authenticated caches ( xrootd@stash-cache-auth service) may need the configuration changes described in the updating to OSG 3.6 section of the XRootD authorization configuration document.","title":"Updating to OSG 3.6"},{"location":"data/stashcache/install-cache/#getting-help","text":"To get assistance, please use the this page .","title":"Getting Help"},{"location":"data/stashcache/install-origin/","text":"Installing the OSDF Origin \u00b6 This document describes how to install an Open Science Data Federation (OSDF) origin service. This service allows an organization to export its data to the data federation. Minimum version for this documentation This document describes features introduced in XCache 3.3.0, released on 2022-12-08. 
When installing, ensure that your version of the stash-origin RPM is at least 3.3.0. Note The OSDF Origin was previously named \"Stash Origin\" and some documentation and software may use the old name. Note The origin must be registered with the OSG prior to joining the data federation. You may start the registration process prior to finishing the installation by using this link along with information like: Resource name and hostname VO associated with this origin server (which will be used to determine the origin's namespace prefix) Administrative and security contact(s) Who (or what) will be allowed to access the VO's data Which caches will be allowed to cache the VO data Before Starting \u00b6 Before starting the installation process, consider the following requirements: Operating system: RHEL 7, RHEL 8, or a compatible operating system. User IDs: If they do not exist already, the installation will create the Linux user IDs condor and xrootd ; only the xrootd user is used by the running daemons. Host certificate: Required for authentication. See our host certificate documentation for instructions on how to request and install host certificates. Network ports: The origin service requires the following ports open: Inbound TCP port 1094 for unauthenticated file access via the XRoot or HTTP protocols (if serving public data) Inbound TCP port 1095 for authenticated file access via the XRoot or HTTPS protocols (if serving authenticated data) Outbound TCP port 1213 to redirector.osgstorage.org for connecting to the data federation Outbound UDP port 9930 for reporting to xrd-report.osgstorage.org and xrd-mon.osgstorage.org for monitoring. Hardware requirements: We recommend that an origin has at least 1Gbps connectivity and 8GB of RAM. We suggest that several gigabytes of local disk space be available for log files, although some logging verbosity can be reduced. As with all OSG software installations, there are some one-time steps to prepare in advance: Obtain root access to the host Prepare the required Yum repositories Install CA certificates Installing the Origin \u00b6 The origin service consists of one or more XRootD daemons and their dependencies for the authentication infrastructure. To simplify installation, OSG provides convenience RPMs that install all required software with a single command: root@host # yum install stash-origin For this installation guide, we assume that the data to be exported to the federation is mounted at /mnt/stash and owned by the xrootd:xrootd user. Configuring the Origin Server \u00b6 The stash-origin package provides default configuration files in /etc/xrootd/xrootd-stash-origin.cfg and /etc/xrootd/config.d . Administrators may provide additional configuration by placing files in /etc/xrootd/config.d of the form /etc/xrootd/config.d/1*.cfg (for directives that need to be processed BEFORE the OSG configuration) or /etc/xrootd/config.d/9*.cfg (for directives that are processed AFTER the OSG configuration). You must configure every variable in /etc/xrootd/config.d/10-common-site-local.cfg and /etc/xrootd/config.d/10-origin-site-local.cfg . 
The mandatory variables to configure are: File Config line Description 10-common-site-local.cfg set rootdir = /mnt/stash The mounted filesystem path to export; this document calls it /mnt/stash 10-common-site-local.cfg set resourcename = YOUR_RESOURCE_NAME The resource name registered with OSG 10-origin-site-local.cfg set PublicOriginExport = /VO/PUBLIC The directory relative to rootdir that is the top of the exported namespace for public (unauthenticated) origin services 10-origin-site-local.cfg set AuthOriginExport = /VO/PUBLIC The directory relative to rootdir that is the top of the exported namespace for authenticated origin services For example, if the HCC VO would like to set up an origin server exporting from the mount point /mnt/stash , and HCC has a public registered namespace at /hcc/PUBLIC , then the following would be set in 10-common-site-local.cfg : set rootdir = /mnt/stash set resourcename = HCC_OSDF_ORIGIN And the following would be set in 10-origin-site-local.cfg : set PublicOriginExport = /hcc/PUBLIC With this configuration, the data under /mnt/stash/hcc/PUBLIC/bio/datasets would be available under the path /hcc/PUBLIC/bio/datasets in the OSDF namespace and the data under /mnt/stash/hcc/PUBLIC/hep/generators would be available under the path /hcc/PUBLIC/hep/generators in the OSDF namespace. If the HCC has a protected registered namespace at /hcc/PROTECTED then set the following in 10-origin-site-local.cfg : set AuthOriginExport = /hcc/PROTECTED If you are serving public data from the origin, you must set PublicOriginExport and use the xrootd@stash-origin service. If you are serving protected data from the origin, you must set AuthOriginExport and use the xrootd@stash-origin-auth service (if not using xrootd-multiuser ) or xrootd-privileged@stash-origin-auth service (if using xrootd-multiuser ). Warning The OSDF namespace is a global namespace. Directories you export must not collide with directories provided by other origin servers; this is why the explicit registration is required. Manually Setting the FQDN (optional) \u00b6 The FQDN of the origin server that you registered in Topology may be different than its internal hostname (as reported by hostname -f ). For example, this may be the case if your origin is behind a load balancer such as LVS. In this case, you must manually tell the origin services which FQDN to use for topology lookups. Create the file /etc/systemd/system/stash-authfile@.service.d/override.conf with the following contents: [Service] Environment = ORIGIN_FQDN= Run systemctl daemon-reload after modifying the file. Managing the Origin Services \u00b6 Serving data for an origin is done by the xrootd daemon. There can be multiple instances of xrootd , running on different ports. The instance that serves unauthenticated data will run on port 1094. The instance that serves authenticated data will run on port 1095. If your origin serves both authenticated and unauthenticated data, you will run both instances. Use of multiuser plugin Some of the service names are different if you have configured the XRootD Multiuser plugin : xrootd-privileged is used instead of xrootd cmsd-privileged is used instead of cmsd The privileged and non-privileged services are mutually exclusive. 
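For example, under the multiuser setup just described, the public and authenticated origin instances would typically be enabled and started roughly as follows (a sketch; the unit names come from the service tables below):
    root@host # systemctl enable --now xrootd@stash-origin.service
    root@host # systemctl enable --now xrootd-privileged@stash-origin-auth.service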
The origin services consist of the following SystemD units that you must directly manage: Service name Notes xrootd@stash-origin.service Performs data transfers (unauthenticated instance) xrootd@stash-origin-auth.service Performs data transfers (authenticated instance without multiuser ) xrootd-privileged@stash-origin-auth.service Performs data transfers (authenticated instance with multiuser ) These services must be managed with systemctl and may start additional services as dependencies. As a reminder, here are common service commands (all run as root ): To... Run the command... Start a service systemctl start Stop a service systemctl stop Enable a service to start on boot systemctl enable Disable a service from starting on boot systemctl disable In addition, the origin service automatically uses the following SystemD units: Service name Notes cmsd@stash-origin.service Integrates the origin into the data federation (unauthenticated instance) cmsd@stash-origin-auth.service Integrates the origin into the data federation (authenticated instance without multiuser ) cmsd-privileged@stash-origin-auth.service Integrates the origin into the data federation (authenticated instance with multiuser ) stash-authfile@stash-origin.timer Updates the authorization files periodically (unauthenticated instance) stash-authfile@stash-origin-auth.timer Updates the authorization files periodically (authenticated instance) Adding to Authorization Files (Optional) \u00b6 The stash-authfile services on the origin generate files that configure authorization for XRootD. Put local additions to this configuration into separate files, according to this table: Purpose Generated file Local additions file VOMS/SSL/X.509 auth config for unauthenticated origin instance /run/stash-origin/Authfile /etc/xrootd/stash-origin-Authfile.local VOMS/SSL/X.509 auth config for authenticated origin instance /run/stash-origin-auth/Authfile /etc/xrootd/stash-origin-auth-Authfile.local SciTokens config for authenticated origin instance /run/stash-origin-auth/scitokens.conf /etc/xrootd/stash-origin-auth-scitokens.conf.local Note Use of these local additions files requires XCache 3.5.0 or newer. Verifying the Origin Server \u00b6 Once your server has been registered with the OSG and started, perform the following steps to verify that it is functional. Testing availability \u00b6 To verify that your origin is correctly advertising its availability, run the following command from the origin server: [user@server ~]$ xrdmapc -r --list s redirector.osgstorage.org:1094 0**** redirector.osgstorage.org:1094 Srv ceph-gridftp1.grid.uchicago.edu:1094 Srv stashcache.fnal.gov:1094 Srv stash.osgconnect.net:1094 Srv origin.ligo.caltech.edu:1094 Srv csiu.grid.iu.edu:1094 The output should list the hostname of your origin server. Testing directory export \u00b6 To verify that the directories you are exporting are visible from the redirector, run the following command from the origin server: [user@server ~]$ xrdmapc -r --verify --list s redirector.osgstorage.org:1094 0*rv* redirector.osgstorage.org:1094 >+ Srv ceph-gridftp1.grid.uchicago.edu:1094 ? Srv stashcache.fnal.gov:1094 [not authorized] >+ Srv stash.osgconnect.net:1094 - Srv origin.ligo.caltech.edu:1094 ? Srv csiu.grid.iu.edu:1094 [connect error] Change for the directory the service is suppose to export. Your server should be marked with a >+ to indicate that it contains the given path and the path was accessible. 
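For example, if your origin exports the illustrative /hcc/PUBLIC namespace used earlier in this document, the directory-export check might look like this (substitute your own exported path):
    [user@server ~]$ xrdmapc -r --verify --list s redirector.osgstorage.org:1094 /hcc/PUBLIC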
Testing file access (unauthenticated origin) \u00b6 To verify that you can download a file from the origin server, use the stashcp tool, which is available in the stashcp RPM. Place a in , where can be any file in a publicly accessible path. Run the following command: [user@host]$ stashcp /tmp/testfile If successful, there should be a file at /tmp/testfile with the contents of the test file on your origin server. If unsuccessful, you can pass the -d flag to stashcp for debug info. You can also test directly downloading from the origin via xrdcp , which is available in the xrootd-client RPM. Run the following command: [user@host]$ xrdcp xroot://:1094/ /tmp/testfile Testing file access (authenticated origin) \u00b6 In order to download files from the origin, caches must be able to access the origin via SSL certificates. To test SSL authentication, use the curl command. Place a in , where can be any file in a protected location. As root on your origin, run the following command: [root@host]# curl --cert /etc/grid-security/hostcert.pem \\ --key /etc/grid-security/hostkey.pem \\ https://:1095/ \\ -o /tmp/testfile If successful, there should be a file at /tmp/testfile with the contents of the test file on your origin server. Note This test requires including the DN of your origin in your origin's OSG Topology registration . To verify that a user can download a file from the origin server, use the stashcp tool, which is available in the stashcp RPM. Obtain a credential (a SciToken or WLCG Token, depending on your origin's configuration). Place a in , where can be any file in a path you expect to be accessible using the credential you just obtained. Run the following command: [user@host]$ stashcp /tmp/testfile If successful, there should be a file at /tmp/testfile with the contents of the test file on your origin server. If unsuccessful, you can pass the -d flag to stashcp for debug info. Registering the Origin \u00b6 To be part of the Open Science Data Federation, your origin must be registered with the OSG . The service type is XRootD origin server . The resource must also specify which VOs it will serve data from. To do this, add an AllowedVOs list, with each line specifying a VO whose data the resource is willing to host. For example: MY_OSDF_ORIGIN : Services : XRootD origin server : Description : OSDF origin server AllowedVOs : - GLOW - OSG DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=my-osdf-origin.example.net You can use the special value ANY to indicate that the origin will serve data from any VO that puts data on it. In addition to the origin allowing a VOs via the AllowedVOs list, that VO must also allow the origin in one of its AllowedOrigins lists in DataFederation/StashCache/Namespaces . See the page on getting your VO's data into OSDF . Specifying the DN of your origin is not required but it is useful for testing. Updating to OSG 3.6 \u00b6 The OSG 3.5 series reached end-of-life on May 1, 2022. Admins are strongly encouraged to move their origins to OSG 3.6. See general update instructions . Unauthenticated origins ( xrootd@stash-origin service) do not need any configuration changes. Authenticated origins ( xrootd@stash-origin-auth service) may need the configuration changes described in the updating to OSG 3.6 section of the XRootD authorization configuration document. 
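As a sketch of the public download tests described above, assuming a test file has been placed at the illustrative path /hcc/PUBLIC/testfile on an origin registered as my-osdf-origin.example.net:
    [user@host]$ stashcp /hcc/PUBLIC/testfile /tmp/testfile
    [user@host]$ xrdcp xroot://my-osdf-origin.example.net:1094//hcc/PUBLIC/testfile /tmp/testfile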
Getting Help \u00b6 To get assistance, please use this page .","title":"Install from RPM"},{"location":"data/stashcache/install-origin/#installing-the-osdf-origin","text":"This document describes how to install an Open Science Data Federation (OSDF) origin service. This service allows an organization to export its data to the data federation. Minimum version for this documentation This document describes features introduced in XCache 3.3.0, released on 2022-12-08. When installing, ensure that your version of the stash-origin RPM is at least 3.3.0. Note The OSDF Origin was previously named \"Stash Origin\" and some documentation and software may use the old name. Note The origin must be registered with the OSG prior to joining the data federation. You may start the registration process prior to finishing the installation by using this link along with information like: Resource name and hostname VO associated with this origin server (which will be used to determine the origin's namespace prefix) Administrative and security contact(s) Who (or what) will be allowed to access the VO's data Which caches will be allowed to cache the VO data","title":"Installing the OSDF Origin"},{"location":"data/stashcache/install-origin/#before-starting","text":"Before starting the installation process, consider the following requirements: Operating system: RHEL 7, RHEL 8, or a compatible operating system. User IDs: If they do not exist already, the installation will create the Linux user IDs condor and xrootd ; only the xrootd user is used for the running daemons. Host certificate: Required for authentication. See our host certificate documentation for instructions on how to request and install host certificates. Network ports: The origin service requires the following ports to be open: Inbound TCP port 1094 for unauthenticated file access via the XRoot or HTTP protocols (if serving public data) Inbound TCP port 1095 for authenticated file access via the XRoot or HTTPS protocols (if serving authenticated data) Outbound TCP port 1213 to redirector.osgstorage.org for connecting to the data federation Outbound UDP port 9930 for reporting to xrd-report.osgstorage.org and xrd-mon.osgstorage.org for monitoring. Hardware requirements: We recommend that an origin has at least 1Gbps connectivity and 8GB of RAM. We suggest that several gigabytes of local disk space be available for log files, although some logging verbosity can be reduced. As with all OSG software installations, there are some one-time steps to prepare in advance: Obtain root access to the host Prepare the required Yum repositories Install CA certificates","title":"Before Starting"},{"location":"data/stashcache/install-origin/#installing-the-origin","text":"The origin service consists of one or more XRootD daemons and their dependencies for the authentication infrastructure. To simplify installation, OSG provides convenience RPMs that install all required software with a single command: root@host # yum install stash-origin For this installation guide, we assume that the data to be exported to the federation is mounted at /mnt/stash and owned by the xrootd:xrootd user.","title":"Installing the Origin"},{"location":"data/stashcache/install-origin/#configuring-the-origin-server","text":"The stash-origin package provides default configuration files in /etc/xrootd/xrootd-stash-origin.cfg and /etc/xrootd/config.d . 
Administrators may provide additional configuration by placing files in /etc/xrootd/config.d of the form /etc/xrootd/config.d/1*.cfg (for directives that need to be processed BEFORE the OSG configuration) or /etc/xrootd/config.d/9*.cfg (for directives that are processed AFTER the OSG configuration). You must configure every variable in /etc/xrootd/config.d/10-common-site-local.cfg and /etc/xrootd/config.d/10-origin-site-local.cfg . The mandatory variables to configure are: File Config line Description 10-common-site-local.cfg set rootdir = /mnt/stash The mounted filesystem path to export; this document calls it /mnt/stash 10-common-site-local.cfg set resourcename = YOUR_RESOURCE_NAME The resource name registered with OSG 10-origin-site-local.cfg set PublicOriginExport = /VO/PUBLIC The directory relative to rootdir that is the top of the exported namespace for public (unauthenticated) origin services 10-origin-site-local.cfg set AuthOriginExport = /VO/PROTECTED The directory relative to rootdir that is the top of the exported namespace for authenticated origin services For example, if the HCC VO would like to set up an origin server exporting from the mount point /mnt/stash , and HCC has a public registered namespace at /hcc/PUBLIC , then the following would be set in 10-common-site-local.cfg : set rootdir = /mnt/stash set resourcename = HCC_OSDF_ORIGIN And the following would be set in 10-origin-site-local.cfg : set PublicOriginExport = /hcc/PUBLIC With this configuration, the data under /mnt/stash/hcc/PUBLIC/bio/datasets would be available under the path /hcc/PUBLIC/bio/datasets in the OSDF namespace and the data under /mnt/stash/hcc/PUBLIC/hep/generators would be available under the path /hcc/PUBLIC/hep/generators in the OSDF namespace. If HCC has a protected registered namespace at /hcc/PROTECTED then set the following in 10-origin-site-local.cfg : set AuthOriginExport = /hcc/PROTECTED If you are serving public data from the origin, you must set PublicOriginExport and use the xrootd@stash-origin service. If you are serving protected data from the origin, you must set AuthOriginExport and use the xrootd@stash-origin-auth service (if not using xrootd-multiuser ) or the xrootd-privileged@stash-origin-auth service (if using xrootd-multiuser ). Warning The OSDF namespace is a global namespace. Directories you export must not collide with directories provided by other origin servers; this is why the explicit registration is required.","title":"Configuring the Origin Server"},{"location":"data/stashcache/install-origin/#manually-setting-the-fqdn-optional","text":"The FQDN of the origin server that you registered in Topology may be different from its internal hostname (as reported by hostname -f ). For example, this may be the case if your origin is behind a load balancer such as LVS. In this case, you must manually tell the origin services which FQDN to use for topology lookups. Create the file /etc/systemd/system/stash-authfile@.service.d/override.conf with the following contents: [Service] Environment = ORIGIN_FQDN= Run systemctl daemon-reload after modifying the file.","title":"Manually Setting the FQDN (optional)"},{"location":"data/stashcache/install-origin/#managing-the-origin-services","text":"Serving data for an origin is done by the xrootd daemon. There can be multiple instances of xrootd , running on different ports. The instance that serves unauthenticated data will run on port 1094. The instance that serves authenticated data will run on port 1095. 
If your origin serves both authenticated and unauthenticated data, you will run both instances. Use of multiuser plugin Some of the service names are different if you have configured the XRootD Multiuser plugin : xrootd-privileged is used instead of xrootd cmsd-privileged is used instead of cmsd The privileged and non-privileged services are mutually exclusive. The origin services consist of the following SystemD units that you must directly manage: Service name Notes xrootd@stash-origin.service Performs data transfers (unauthenticated instance) xrootd@stash-origin-auth.service Performs data transfers (authenticated instance without multiuser ) xrootd-privileged@stash-origin-auth.service Performs data transfers (authenticated instance with multiuser ) These services must be managed with systemctl and may start additional services as dependencies. As a reminder, here are common service commands (all run as root ): To... Run the command... Start a service systemctl start Stop a service systemctl stop Enable a service to start on boot systemctl enable Disable a service from starting on boot systemctl disable In addition, the origin service automatically uses the following SystemD units: Service name Notes cmsd@stash-origin.service Integrates the origin into the data federation (unauthenticated instance) cmsd@stash-origin-auth.service Integrates the origin into the data federation (authenticated instance without multiuser ) cmsd-privileged@stash-origin-auth.service Integrates the origin into the data federation (authenticated instance with multiuser ) stash-authfile@stash-origin.timer Updates the authorization files periodically (unauthenticated instance) stash-authfile@stash-origin-auth.timer Updates the authorization files periodically (authenticated instance)","title":"Managing the Origin Services"},{"location":"data/stashcache/install-origin/#adding-to-authorization-files-optional","text":"The stash-authfile services on the origin generate files that configure authorization for XRootD. 
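For instance, you can inspect what these services generated for the unauthenticated instance (a minimal sketch, assuming the default generated-file location listed in the table below):
root@host # cat /run/stash-origin/Authfile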
Put local additions to this configuration into separate files, according to this table: Purpose Generated file Local additions file VOMS/SSL/X.509 auth config for unauthenticated origin instance /run/stash-origin/Authfile /etc/xrootd/stash-origin-Authfile.local VOMS/SSL/X.509 auth config for authenticated origin instance /run/stash-origin-auth/Authfile /etc/xrootd/stash-origin-auth-Authfile.local SciTokens config for authenticated origin instance /run/stash-origin-auth/scitokens.conf /etc/xrootd/stash-origin-auth-scitokens.conf.local Note Use of these local additions files requires XCache 3.5.0 or newer.","title":"Adding to Authorization Files (Optional)"},{"location":"data/stashcache/install-origin/#verifying-the-origin-server","text":"Once your server has been registered with the OSG and started, perform the following steps to verify that it is functional.","title":"Verifying the Origin Server"},{"location":"data/stashcache/install-origin/#testing-availability","text":"To verify that your origin is correctly advertising its availability, run the following command from the origin server: [user@server ~]$ xrdmapc -r --list s redirector.osgstorage.org:1094 0**** redirector.osgstorage.org:1094 Srv ceph-gridftp1.grid.uchicago.edu:1094 Srv stashcache.fnal.gov:1094 Srv stash.osgconnect.net:1094 Srv origin.ligo.caltech.edu:1094 Srv csiu.grid.iu.edu:1094 The output should list the hostname of your origin server.","title":"Testing availability"},{"location":"data/stashcache/install-origin/#testing-directory-export","text":"To verify that the directories you are exporting are visible from the redirector, run the following command from the origin server: [user@server ~]$ xrdmapc -r --verify --list s redirector.osgstorage.org:1094 0*rv* redirector.osgstorage.org:1094 >+ Srv ceph-gridftp1.grid.uchicago.edu:1094 ? Srv stashcache.fnal.gov:1094 [not authorized] >+ Srv stash.osgconnect.net:1094 - Srv origin.ligo.caltech.edu:1094 ? Srv csiu.grid.iu.edu:1094 [connect error] Change for the directory the service is suppose to export. Your server should be marked with a >+ to indicate that it contains the given path and the path was accessible.","title":"Testing directory export"},{"location":"data/stashcache/install-origin/#testing-file-access-unauthenticated-origin","text":"To verify that you can download a file from the origin server, use the stashcp tool, which is available in the stashcp RPM. Place a in , where can be any file in a publicly accessible path. Run the following command: [user@host]$ stashcp /tmp/testfile If successful, there should be a file at /tmp/testfile with the contents of the test file on your origin server. If unsuccessful, you can pass the -d flag to stashcp for debug info. You can also test directly downloading from the origin via xrdcp , which is available in the xrootd-client RPM. Run the following command: [user@host]$ xrdcp xroot://:1094/ /tmp/testfile","title":"Testing file access (unauthenticated origin)"},{"location":"data/stashcache/install-origin/#testing-file-access-authenticated-origin","text":"In order to download files from the origin, caches must be able to access the origin via SSL certificates. To test SSL authentication, use the curl command. Place a in , where can be any file in a protected location. 
As root on your origin, run the following command: [root@host]# curl --cert /etc/grid-security/hostcert.pem \\ --key /etc/grid-security/hostkey.pem \\ https://:1095/ \\ -o /tmp/testfile If successful, there should be a file at /tmp/testfile with the contents of the test file on your origin server. Note This test requires including the DN of your origin in your origin's OSG Topology registration . To verify that a user can download a file from the origin server, use the stashcp tool, which is available in the stashcp RPM. Obtain a credential (a SciToken or WLCG Token, depending on your origin's configuration). Place a in , where can be any file in a path you expect to be accessible using the credential you just obtained. Run the following command: [user@host]$ stashcp /tmp/testfile If successful, there should be a file at /tmp/testfile with the contents of the test file on your origin server. If unsuccessful, you can pass the -d flag to stashcp for debug info.","title":"Testing file access (authenticated origin)"},{"location":"data/stashcache/install-origin/#registering-the-origin","text":"To be part of the Open Science Data Federation, your origin must be registered with the OSG . The service type is XRootD origin server . The resource must also specify which VOs it will serve data from. To do this, add an AllowedVOs list, with each line specifying a VO whose data the resource is willing to host. For example: MY_OSDF_ORIGIN : Services : XRootD origin server : Description : OSDF origin server AllowedVOs : - GLOW - OSG DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=my-osdf-origin.example.net You can use the special value ANY to indicate that the origin will serve data from any VO that puts data on it. In addition to the origin allowing a VOs via the AllowedVOs list, that VO must also allow the origin in one of its AllowedOrigins lists in DataFederation/StashCache/Namespaces . See the page on getting your VO's data into OSDF . Specifying the DN of your origin is not required but it is useful for testing.","title":"Registering the Origin"},{"location":"data/stashcache/install-origin/#updating-to-osg-36","text":"The OSG 3.5 series reached end-of-life on May 1, 2022. Admins are strongly encouraged to move their origins to OSG 3.6. See general update instructions . Unauthenticated origins ( xrootd@stash-origin service) do not need any configuration changes. Authenticated origins ( xrootd@stash-origin-auth service) may need the configuration changes described in the updating to OSG 3.6 section of the XRootD authorization configuration document.","title":"Updating to OSG 3.6"},{"location":"data/stashcache/install-origin/#getting-help","text":"To get assistance, please use the this page .","title":"Getting Help"},{"location":"data/stashcache/overview/","text":"Open Science Data Federation Overview \u00b6 The OSG operates the Open Science Data Federation (OSDF), which provides organizations with a method to distribute their data in a scalable manner to thousands of jobs without needing to pre-stage data at each site. The map below shows the location of the current caches in the federation: Joining and Using the OSDF \u00b6 We support three types of deployments: We operate the service for you. All you need is provide us with a Kubernetes host to deploy our container into. This is our preferred way for you to join. It is conceptually described on our home website for an origin. A cache would be deployed exactly the same way. 
If this is how you want to join OSDF, please send email to support@osg-htc.org and we will guide you through the process. You can deploy our container yourself as described in our documentation . You can deploy from RPM as described in our documentation We strongly suggest that you allow us to operate these services for you (option 1) . The software that implements the service changes frequently enough, and is complicated enough, that keeping up with changes may require significant effort. If your installation is deemed too out-of-date, your service may be excluded from the OSDF. For more information on the OSDF , please see our overview page .","title":"Overview"},{"location":"data/stashcache/overview/#open-science-data-federation-overview","text":"The OSG operates the Open Science Data Federation (OSDF), which provides organizations with a method to distribute their data in a scalable manner to thousands of jobs without needing to pre-stage data at each site. The map below shows the location of the current caches in the federation:","title":"Open Science Data Federation Overview"},{"location":"data/stashcache/overview/#joining-and-using-the-osdf","text":"We support three types of deployments: We operate the service for you. All you need is provide us with a Kubernetes host to deploy our container into. This is our preferred way for you to join. It is conceptually described on our home website for an origin. A cache would be deployed exactly the same way. If this is how you want to join OSDF, please send email to support@osg-htc.org and we will guide you through the process. You can deploy our container yourself as described in our documentation . You can deploy from RPM as described in our documentation We strongly suggest that you allow us to operate these services for you (option 1) . The software that implements the service changes frequently enough, and is complicated enough, that keeping up with changes may require significant effort. If your installation is deemed too out-of-date, your service may be excluded from the OSDF. For more information on the OSDF , please see our overview page .","title":"Joining and Using the OSDF"},{"location":"data/stashcache/run-stash-origin-container/","text":"Running OSDF Origin in a Container \u00b6 The OSG operates the Open Science Data Federation (OSDF), which provides organizations with a method to distribute their data in a scalable manner to thousands of jobs without needing to pre-stage data across sites or operate their own scalable infrastructure. Origins store copies of users' data. Each community (or experiment) needs to run one origin to export its data via the federation. This document outlines how to run such an origin in a Docker container. Note The OSDF Origin was previously named \"Stash Origin\" and some documentation and software may use the old name. Before Starting \u00b6 Before starting the installation process, consider the following requirements: Docker: For the purpose of this guide, the host must have a running docker service and you must have the ability to start containers (i.e., belong to the docker Unix group). Network ports: The origin listens for incoming HTTP(S) and XRootD connections on ports 1094 and/or 1095. 1094 is used for serving public (unauthenticated) data, and 1095 is used for serving authenticated data. File Systems: The origin needs a host partition to store user data. Hardware requirements: We recommend that an origin has at least 1Gbps connectivity and 8GB of RAM. 
Host certificate: Required for authentication. See our host certificate documentation for instructions on how to request host certificates. Registration: Before deploying an origin, you must register the service in the OSG Topology Note This document describes features introduced in XCache 3.2.2, released on 2022-09-29. You must use a version of the opensciencegrid/stash-origin image built after that date. Configuring the Origin \u00b6 In addition to the required configuration above (ports and file systems), you may also configure the behavior of your origin with the following variables using an environment variable file: Where the environment file on the docker host, /opt/origin/.env , has (at least) the following contents, replacing with the resource name of your origin as registered in Topology and with the public DNS name that should be used to contact your origin: XC_RESOURCENAME=YOUR_SITE_NAME ORIGIN_FQDN= In addition, define the following variables to specify which subpaths should be served as public (unauthenticated) data on port 1094, and which subpaths should be served as authenticated data on port 1095: XC_PUBLIC_ORIGIN_EXPORT=//PUBLIC XC_AUTH_ORIGIN_EXPORT=//PROTECTED These paths are relative to the host partition being served -- see the Populating Origin Data section below. If you only define XC_AUTH_ORIGIN_EXPORT , you will only serve data on port 1095. If you only define XC_PUBLIC_ORIGIN_EXPORT , you will only serve data on port 1094. If you do not define either, you will serve the entire host partition as public data on port 1094. Note For backward compatibility, XC_ORIGINEXPORT is accepted as an alias for XC_PUBLIC_ORIGIN_EXPORT . Providing a host certificate \u00b6 The service will need a certificate for contacting central OSDF services and for authenticating connections. Follow our host certificate documentation to obtain a host certificate and key. Then, volume-mount the host certificate to /etc/grid-security/hostcert.pem , and the key to /etc/grid-security/hostkey.pem . Note You must restart the container whenever you renew your certificate in order for the services to pick up the new certificate. If you automate certificate renewal, you should automate restarts as well. For example, if you are using Certbot for Let's Encrypt, you should write a \"deploy hook\" as documented on the Certbot site . Note A small number of CAs in the IGTF and OSG CA cert distributions were signed with the SHA1 algorithm, which is not accepted by default starting with Enterprise Linux 9. If you are running an origin image based on OSG 23 or newer, and your situation includes any of the following: Your origin's host certificate was signed by a SHA1-signed-CA You are expecting connections from clients or caches that authenticate themselves with a cert from a SHA1-signed-CA then you must add ENABLE_SHA1=yes to the environment variable file. Populating Origin Data \u00b6 The OSDF namespace is shared by multiple VOs so you must choose a namespace for your own VO's data. When running an origin container, your chosen namespace must be reflected in your host partition. For example, if your host partition is /srv/origin and the name of your VO is ASTRO , you should store the Astro VO's public data in /srv/origin/astro/PUBLIC , and protected data in /srv/origin/astro/PROTECTED . When starting the container, mount /srv/origin/ into /xcache/namespace in the container, and set the environment variables XC_PUBLIC_ORIGIN_EXPORT=/astro/PUBLIC and XC_AUTH_ORIGIN_EXPORT=/astro/PROTECTED . 
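Putting this together, a sketch of the environment file /opt/origin/.env for this hypothetical Astro setup (the resource name and FQDN below are placeholders; use the values from your own Topology registration):
XC_RESOURCENAME=ASTRO_OSDF_ORIGIN
ORIGIN_FQDN=origin.astro.example.net
XC_PUBLIC_ORIGIN_EXPORT=/astro/PUBLIC
XC_AUTH_ORIGIN_EXPORT=/astro/PROTECTED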
You may omit XC_AUTH_ORIGIN_EXPORT if you are only serving public data, or omit XC_PUBLIC_ORIGIN_EXPORT if you are only serving protected data. If you omit both, the entire /srv/origin partition will be served as public data. Running the Origin \u00b6 It is recommended to use a container orchestration service such as docker-compose or kubernetes whose details are beyond the scope of this document. The following sections provide examples for starting origin containers from the command-line as well as a more production-appropriate method using systemd. user@host $ docker run --rm --publish 1094 :1094 --publish 1095 :1095 \\ --volume :/xcache/namespace \\ --volume :/etc/grid-security/hostcert.pem \\ --volume :/etc/grid-security/hostkey.pem \\ --env-file = /opt/origin/.env \\ opensciencegrid/stash-origin:23-release Replacing with the host directory containing data that your origin should serve. See this section for details. Warning Unless configured otherwise via the env file /opt/origin/.env , a container deployed this way will serve the entire contents of . See the Configuring the Origin section for information on how to serve one subpath as public and another as protected. Note You may omit --publish 1094:1094 if you are only serving authenticated data, or omit --publish 1095:1095 if you are only serving public data. Running on origin container with systemd \u00b6 An example systemd service file for the OSDF. This will require creating the environment file in the directory /opt/origin/.env . Note This example systemd file assumes is /srv/origin , and the cert and key to use are in /etc/ssl/host.crt and /etc/ssl/host.key , respectively. Create the systemd service file /etc/systemd/system/docker.stash-origin.service as follows: [Unit] Description=Origin Container After=docker.service Requires=docker.service [Service] TimeoutStartSec=0 Restart=always ExecStartPre=-/usr/bin/docker stop %n ExecStartPre=-/usr/bin/docker rm %n ExecStartPre=/usr/bin/docker pull opensciencegrid/stash-origin:23-release ExecStart=/usr/bin/docker run --rm --name %n \\ --publish 1094:1094 \\ --publish 1095:1095 \\ --volume /srv/origin:/xcache/namespace \\ --volume /etc/ssl/host.crt:/etc/grid-security/hostcert.pem \\ --volume /etc/ssl/host.key:/etc/grid-security/hostkey.pem \\ --env-file /opt/origin/.env \\ opensciencegrid/stash-origin:23-release [Install] WantedBy=multi-user.target Enable and start the service with: root@host $ systemctl enable docker.stash-origin root@host $ systemctl start docker.stash-origin Warning Unless configured otherwise via the env file /opt/origin/.env , a container deployed this way will serve the entire contents of /srv/origin . See the Configuring the Origin section for information on how to serve one subpath as public and another as protected. Note You may omit --publish 1094:1094 if you are only serving authenticated data, or omit --publish 1095:1095 if you are only serving public data. Warning You must register the origin before starting it up. Validating the Origin \u00b6 To validate the origin please follow the validating origin instructions . 
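As a quick smoke test once the origin is registered and running, you can try fetching a public file through the federation with stashcp (a sketch; the path below is a hypothetical file under the public namespace configured above):
[user@host]$ stashcp /astro/PUBLIC/test.txt /tmp/testfile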
Getting Help \u00b6 To get assistance, please use the this page .","title":"Install from container"},{"location":"data/stashcache/run-stash-origin-container/#running-osdf-origin-in-a-container","text":"The OSG operates the Open Science Data Federation (OSDF), which provides organizations with a method to distribute their data in a scalable manner to thousands of jobs without needing to pre-stage data across sites or operate their own scalable infrastructure. Origins store copies of users' data. Each community (or experiment) needs to run one origin to export its data via the federation. This document outlines how to run such an origin in a Docker container. Note The OSDF Origin was previously named \"Stash Origin\" and some documentation and software may use the old name.","title":"Running OSDF Origin in a Container"},{"location":"data/stashcache/run-stash-origin-container/#before-starting","text":"Before starting the installation process, consider the following requirements: Docker: For the purpose of this guide, the host must have a running docker service and you must have the ability to start containers (i.e., belong to the docker Unix group). Network ports: The origin listens for incoming HTTP(S) and XRootD connections on ports 1094 and/or 1095. 1094 is used for serving public (unauthenticated) data, and 1095 is used for serving authenticated data. File Systems: The origin needs a host partition to store user data. Hardware requirements: We recommend that an origin has at least 1Gbps connectivity and 8GB of RAM. Host certificate: Required for authentication. See our host certificate documentation for instructions on how to request host certificates. Registration: Before deploying an origin, you must register the service in the OSG Topology Note This document describes features introduced in XCache 3.2.2, released on 2022-09-29. You must use a version of the opensciencegrid/stash-origin image built after that date.","title":"Before Starting"},{"location":"data/stashcache/run-stash-origin-container/#configuring-the-origin","text":"In addition to the required configuration above (ports and file systems), you may also configure the behavior of your origin with the following variables using an environment variable file: Where the environment file on the docker host, /opt/origin/.env , has (at least) the following contents, replacing with the resource name of your origin as registered in Topology and with the public DNS name that should be used to contact your origin: XC_RESOURCENAME=YOUR_SITE_NAME ORIGIN_FQDN= In addition, define the following variables to specify which subpaths should be served as public (unauthenticated) data on port 1094, and which subpaths should be served as authenticated data on port 1095: XC_PUBLIC_ORIGIN_EXPORT=//PUBLIC XC_AUTH_ORIGIN_EXPORT=//PROTECTED These paths are relative to the host partition being served -- see the Populating Origin Data section below. If you only define XC_AUTH_ORIGIN_EXPORT , you will only serve data on port 1095. If you only define XC_PUBLIC_ORIGIN_EXPORT , you will only serve data on port 1094. If you do not define either, you will serve the entire host partition as public data on port 1094. Note For backward compatibility, XC_ORIGINEXPORT is accepted as an alias for XC_PUBLIC_ORIGIN_EXPORT .","title":"Configuring the Origin"},{"location":"data/stashcache/run-stash-origin-container/#providing-a-host-certificate","text":"The service will need a certificate for contacting central OSDF services and for authenticating connections. 
Follow our host certificate documentation to obtain a host certificate and key. Then, volume-mount the host certificate to /etc/grid-security/hostcert.pem , and the key to /etc/grid-security/hostkey.pem . Note You must restart the container whenever you renew your certificate in order for the services to pick up the new certificate. If you automate certificate renewal, you should automate restarts as well. For example, if you are using Certbot for Let's Encrypt, you should write a \"deploy hook\" as documented on the Certbot site . Note A small number of CAs in the IGTF and OSG CA cert distributions were signed with the SHA1 algorithm, which is not accepted by default starting with Enterprise Linux 9. If you are running an origin image based on OSG 23 or newer, and your situation includes any of the following: Your origin's host certificate was signed by a SHA1-signed-CA You are expecting connections from clients or caches that authenticate themselves with a cert from a SHA1-signed-CA then you must add ENABLE_SHA1=yes to the environment variable file.","title":"Providing a host certificate"},{"location":"data/stashcache/run-stash-origin-container/#populating-origin-data","text":"The OSDF namespace is shared by multiple VOs so you must choose a namespace for your own VO's data. When running an origin container, your chosen namespace must be reflected in your host partition. For example, if your host partition is /srv/origin and the name of your VO is ASTRO , you should store the Astro VO's public data in /srv/origin/astro/PUBLIC , and protected data in /srv/origin/astro/PROTECTED . When starting the container, mount /srv/origin/ into /xcache/namespace in the container, and set the environment variables XC_PUBLIC_ORIGIN_EXPORT=/astro/PUBLIC and XC_AUTH_ORIGIN_EXPORT=/astro/PROTECTED . You may omit XC_AUTH_ORIGIN_EXPORT if you are only serving public data, or omit XC_PUBLIC_ORIGIN_EXPORT if you are only serving protected data. If you omit both, the entire /srv/origin partition will be served as public data.","title":"Populating Origin Data"},{"location":"data/stashcache/run-stash-origin-container/#running-the-origin","text":"It is recommended to use a container orchestration service such as docker-compose or kubernetes whose details are beyond the scope of this document. The following sections provide examples for starting origin containers from the command-line as well as a more production-appropriate method using systemd. user@host $ docker run --rm --publish 1094 :1094 --publish 1095 :1095 \\ --volume :/xcache/namespace \\ --volume :/etc/grid-security/hostcert.pem \\ --volume :/etc/grid-security/hostkey.pem \\ --env-file = /opt/origin/.env \\ opensciencegrid/stash-origin:23-release Replacing with the host directory containing data that your origin should serve. See this section for details. Warning Unless configured otherwise via the env file /opt/origin/.env , a container deployed this way will serve the entire contents of . See the Configuring the Origin section for information on how to serve one subpath as public and another as protected. Note You may omit --publish 1094:1094 if you are only serving authenticated data, or omit --publish 1095:1095 if you are only serving public data.","title":"Running the Origin"},{"location":"data/stashcache/run-stash-origin-container/#running-on-origin-container-with-systemd","text":"An example systemd service file for the OSDF. This will require creating the environment file in the directory /opt/origin/.env . 
Note This example systemd file assumes is /srv/origin , and the cert and key to use are in /etc/ssl/host.crt and /etc/ssl/host.key , respectively. Create the systemd service file /etc/systemd/system/docker.stash-origin.service as follows: [Unit] Description=Origin Container After=docker.service Requires=docker.service [Service] TimeoutStartSec=0 Restart=always ExecStartPre=-/usr/bin/docker stop %n ExecStartPre=-/usr/bin/docker rm %n ExecStartPre=/usr/bin/docker pull opensciencegrid/stash-origin:23-release ExecStart=/usr/bin/docker run --rm --name %n \\ --publish 1094:1094 \\ --publish 1095:1095 \\ --volume /srv/origin:/xcache/namespace \\ --volume /etc/ssl/host.crt:/etc/grid-security/hostcert.pem \\ --volume /etc/ssl/host.key:/etc/grid-security/hostkey.pem \\ --env-file /opt/origin/.env \\ opensciencegrid/stash-origin:23-release [Install] WantedBy=multi-user.target Enable and start the service with: root@host $ systemctl enable docker.stash-origin root@host $ systemctl start docker.stash-origin Warning Unless configured otherwise via the env file /opt/origin/.env , a container deployed this way will serve the entire contents of /srv/origin . See the Configuring the Origin section for information on how to serve one subpath as public and another as protected. Note You may omit --publish 1094:1094 if you are only serving authenticated data, or omit --publish 1095:1095 if you are only serving public data. Warning You must register the origin before starting it up.","title":"Running on origin container with systemd"},{"location":"data/stashcache/run-stash-origin-container/#validating-the-origin","text":"To validate the origin please follow the validating origin instructions .","title":"Validating the Origin"},{"location":"data/stashcache/run-stash-origin-container/#getting-help","text":"To get assistance, please use the this page .","title":"Getting Help"},{"location":"data/stashcache/run-stashcache-container/","text":"Running OSDF Cache in a Container \u00b6 The OSG operates the Open Science Data Federation (OSDF), which provides organizations with a method to distribute their data in a scalable manner to thousands of jobs without needing to pre-stage data across sites or operate their own scalable infrastructure. OSDF Caches transfer data to clients such as jobs or users. A set of caches are operated across the OSG for the benefit of nearby sites; in addition, each site may run its own cache in order to reduce the amount of data transferred over the WAN. This document outlines how to run a cache in a Docker container. Note The OSDF cache was previously named \"Stash Cache\" and some documentation and software may use the old name. Before Starting \u00b6 Before starting the installation process, consider the following requirements: Docker: For the purpose of this guide, the host must have a running docker service and you must have the ability to start containers (i.e., belong to the docker Unix group). Network ports: The cache service requires the following open ports: Inbound TCP port 1094 for unauthenticated file access via the XRootD protocol (optional) Inbound TCP port 8000 for unauthenticated file access via HTTP(S) and/or Inbound TCP port 8443 for authenticated file access via HTTPS Outbound UDP port 9930 for reporting to xrd-report.osgstorage.org and xrd-mon.osgstorage.org for monitoring File Systems: The cache needs host partitions to store user data. 
For improved performance and storage, we recommend multiple partitions for handling namespaces (HDD, SSD, or NVMe), data (HDDs), and metadata (SSDs or NVMe). Host certificate: Required for authentication. See our host certificate documentation for instructions on how to request host certificates. Hardware requirements: We recommend that a cache has at least 10Gbps connectivity, 1 TB of disk space for the cache directory, and 12GB of RAM. Registering the Cache \u00b6 To be part of the OSDF, your cache must be registered with the OSG. You will need basic information like the resource name, hostname, host certificate DN, and the administrative and security contacts. Initial registration \u00b6 To register your cache host, follow the general registration instructions here . The service type is XRootD cache server . Info This step must be completed before installation. In your registration, you must specify which VOs your cache will serve by adding an AllowedVOs list, with each line specifying a VO whose data you are willing to cache. There are special values you may use in AllowedVOs : ANY_PUBLIC indicates that the cache is willing to serve public data from any VO. ANY indicates that the cache is willing to serve data from any VO, both public and protected. ANY implies ANY_PUBLIC . There are extra requirements for serving protected data: In addition to the cache allowing a VO in the AllowedVOs list, that VO must also allow the cache in its AllowedCaches list. See the page on getting your VO's data into OSDF . There must be an authenticated XRootD instance on the cache server. There must be a DN attribute in the resource registration with the subject DN of the host certificate This is an example registration for a cache server that serves all public data: MY_OSDF_CACHE : FQDN : my-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - ANY_PUBLIC This is an example registration for a cache server that only serves protected data for the Open Science Pool: MY_AUTH_OSDF_CACHE : FQDN : my-auth-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - OSG DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=my-auth-cache.example.net This is an example registration for a cache server that serves all public data and protected data from the OSG VO: MY_COMBO_OSDF_CACHE : FQDN : my-combo-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - OSG - ANY_PUBLIC DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=my-combo-cache.example.net Configuring the OSDF Cache \u00b6 In addition to the required configuration above (ports and file systems), you may also configure the behavior of your cache with the following variables using an environment variable file: Where the environment file on the docker host, /opt/xcache/.env , has (at least) the following contents, replacing with the name of your resource as registered in Topology and with the public DNS name that should be used to contact your cache: XC_RESOURCENAME= CACHE_FQDN= Providing a host certificate \u00b6 The service will need a certificate for contacting central OSDF services and for authenticating to origins. Follow our host certificate documentation to obtain a host certificate and key. Then, volume-mount the host certificate to /etc/grid-security/hostcert.pem , and the key to /etc/grid-security/hostkey.pem . 
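For example, the relevant docker run options might look like the following sketch (the host-side paths are assumptions; the systemd example later on this page uses /etc/ssl/host.crt and /etc/ssl/host.key):
--volume /etc/ssl/host.crt:/etc/grid-security/hostcert.pem \
--volume /etc/ssl/host.key:/etc/grid-security/hostkey.pem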
Note You must restart the container whenever you renew your certificate in order for the services to pick up the new certificate. If you automate certificate renewal, you should automate restarts as well. For example, if you are using Certbot for Let's Encrypt, you should write a \"deploy hook\" as documented on the Certbot site . Note A small number of CAs in the IGTF and OSG CA cert distributions were signed with the SHA1 algorithm, which is not accepted by default starting with Enterprise Linux 9. If you are running a cache image based on OSG 23 or newer, and your situation includes any of the following: Your cache's host certificate was signed by a SHA1-signed-CA You are expecting connections from clients that authenticate themselves with a cert from a SHA1-signed-CA Your cache serves data from an origin whose host certificate was signed by a SHA1-signed-CA then you must add ENABLE_SHA1=yes to the environment variable file. Optional configuration \u00b6 Further behavior of the cache can be configured by setting the following in the environment variable file: XC_SPACE_HIGH_WM , XC_SPACE_LOW_WM : High-water and low-water marks for disk usage, as numbers between 0.00 (0%) and 1.00 (100%); when usage goes above the high-water mark, the cache will delete files until it hits the low-water mark. XC_RAMSIZE : Amount of memory to use for storing blocks before writting them to disk. (Use higher for slower disks). XC_BLOCKSIZE : Size of the blocks in the cache. XC_PREFETCH : Number of blocks to prefetch from a file at once. This controls how aggressive the cache is to request portions of a file. Running a Cache \u00b6 Cache containers may be run with either multiple mounted host partitions (recommended) or a single host partition. It is recommended to use a container orchestration service such as docker-compose or kubernetes whose details are beyond the scope of this document. The following sections provide examples for starting cache containers from the command-line as well as a more production-appropriate method using systemd. Multiple host partitions (recommended) \u00b6 For improved performance and storage, especially if your cache is serving over 10 TB of data, we recommend multiple partitions for handling namespaces (HDD, SSD, or NVMe), data (HDDs), and metadata (SSDs or NVMe). Note Under this configuration the is not used to store the files. Instead, the partition stores symlinks to the files in the metadata and data partitions. user@host $ docker run --rm \\ --publish :8000 \\ --publish :8443 \\ --volume :/etc/grid-security/hostcert.pem \\ --volume :/etc/grid-security/hostkey.pem \\ --volume :/xcache/namespace \\ --volume :/xcache/meta1 ... --volume :/xcache/metaN --volume :/xcache/data1 ... --volume :/xcache/dataN --env-file=/opt/xcache/.env \\ opensciencegrid/stash-cache:23-release Warning For over 10 TB of assigned space we highly encourage to use this setup and mount in solid state disks or NVMe. Single host partition \u00b6 For a simpler installation, you may use a single host partition mounted to /xcache/ : user@host $ docker run --rm \\ --publish :8000 \\ --publish :8443 \\ --volume :/xcache \\ --volume :/etc/grid-security/hostcert.pem \\ --volume :/etc/grid-security/hostkey.pem \\ --env-file = /opt/xcache/.env \\ opensciencegrid/stash-cache:23-release Running a cache on container with systemd \u00b6 An example systemd service file for the OSDF cache. This will require creating the environment file in the directory /opt/xcache/.env . 
Note This example systemd file assumes is 8000 , is 8443 , is /srv/cache , and the cert and key to use are in /etc/ssl/host.crt and /etc/ssl/host.key , respectively. Create the systemd service file /etc/systemd/system/docker.stash-cache.service as follows: [Unit] Description=Cache Container After=docker.service Requires=docker.service [Service] TimeoutStartSec=0 Restart=always ExecStartPre=-/usr/bin/docker stop %n ExecStartPre=-/usr/bin/docker rm %n ExecStartPre=/usr/bin/docker pull opensciencegrid/stash-cache:23-release ExecStart=/usr/bin/docker run --rm --name %n \\ --publish 8000:8000 \\ --publish 8443:8443 \\ --volume /srv/cache:/xcache \\ --volume /etc/ssl/host.crt:/etc/grid-security/hostcert.pem \\ --volume /etc/ssl/host.key:/etc/grid-security/hostkey.pem \\ --env-file /opt/xcache/.env \\ opensciencegrid/stash-cache:23-release [Install] WantedBy=multi-user.target Enable and start the service with: root@host $ systemctl enable docker.stash-cache root@host $ systemctl start docker.stash-cache Warning You must register the cache before starting it up. Network optimization \u00b6 For caches that are connected to NICs over 40 Gbps we recommend that you disable the virtualized network and \"bind\" the container to the host network: user@host $ docker run --rm \\ --network = \"host\" \\ --volume :/cache \\ --volume :/etc/grid-security/hostcert.pem \\ --volume :/etc/grid-security/hostkey.pem \\ --env-file = /opt/xcache/.env \\ opensciencegrid/stash-cache:23-release Memory optimization \u00b6 The cache uses the host's memory for two purposes: Caching files recently read from disk (via the kernel page cache). Buffering files recently received from the network before writing them to disk (to compensate for slow disks). An easy way to increase the performance of the cache is to assign it more memory. If you set a limit on the container's memory usage via the docker option --memory or Kubernetes resource limits, make sure it is at least twice the value of XC_RAMSIZE . Validating the Cache \u00b6 The cache server functions as a normal HTTP server and can interact with typical HTTP clients, such as curl . Here, is the port chosen in the docker run command, 8000 by default. user@host $ curl -O http://cache_host:/ospool/uc-shared/public/OSG-Staff/validation/test.txt curl may not correctly report a failure, so verify that the contents of the file are: Hello, World! Getting Help \u00b6 To get assistance, please use the this page .","title":"Install from container"},{"location":"data/stashcache/run-stashcache-container/#running-osdf-cache-in-a-container","text":"The OSG operates the Open Science Data Federation (OSDF), which provides organizations with a method to distribute their data in a scalable manner to thousands of jobs without needing to pre-stage data across sites or operate their own scalable infrastructure. OSDF Caches transfer data to clients such as jobs or users. A set of caches are operated across the OSG for the benefit of nearby sites; in addition, each site may run its own cache in order to reduce the amount of data transferred over the WAN. This document outlines how to run a cache in a Docker container. 
Note The OSDF cache was previously named \"Stash Cache\" and some documentation and software may use the old name.","title":"Running OSDF Cache in a Container"},{"location":"data/stashcache/run-stashcache-container/#before-starting","text":"Before starting the installation process, consider the following requirements: Docker: For the purpose of this guide, the host must have a running docker service and you must have the ability to start containers (i.e., belong to the docker Unix group). Network ports: The cache service requires the following open ports: Inbound TCP port 1094 for unauthenticated file access via the XRootD protocol (optional) Inbound TCP port 8000 for unauthenticated file access via HTTP(S) and/or Inbound TCP port 8443 for authenticated file access via HTTPS Outbound UDP port 9930 for reporting to xrd-report.osgstorage.org and xrd-mon.osgstorage.org for monitoring File Systems: The cache needs host partitions to store user data. For improved performance and storage, we recommend multiple partitions for handling namespaces (HDD, SSD, or NVMe), data (HDDs), and metadata (SSDs or NVMe). Host certificate: Required for authentication. See our host certificate documentation for instructions on how to request host certificates. Hardware requirements: We recommend that a cache has at least 10Gbps connectivity, 1 TB of disk space for the cache directory, and 12GB of RAM.","title":"Before Starting"},{"location":"data/stashcache/run-stashcache-container/#registering-the-cache","text":"To be part of the OSDF, your cache must be registered with the OSG. You will need basic information like the resource name, hostname, host certificate DN, and the administrative and security contacts.","title":"Registering the Cache"},{"location":"data/stashcache/run-stashcache-container/#initial-registration","text":"To register your cache host, follow the general registration instructions here . The service type is XRootD cache server . Info This step must be completed before installation. In your registration, you must specify which VOs your cache will serve by adding an AllowedVOs list, with each line specifying a VO whose data you are willing to cache. There are special values you may use in AllowedVOs : ANY_PUBLIC indicates that the cache is willing to serve public data from any VO. ANY indicates that the cache is willing to serve data from any VO, both public and protected. ANY implies ANY_PUBLIC . There are extra requirements for serving protected data: In addition to the cache allowing a VO in the AllowedVOs list, that VO must also allow the cache in its AllowedCaches list. See the page on getting your VO's data into OSDF . There must be an authenticated XRootD instance on the cache server. 
There must be a DN attribute in the resource registration with the subject DN of the host certificate This is an example registration for a cache server that serves all public data: MY_OSDF_CACHE : FQDN : my-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - ANY_PUBLIC This is an example registration for a cache server that only serves protected data for the Open Science Pool: MY_AUTH_OSDF_CACHE : FQDN : my-auth-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - OSG DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=my-auth-cache.example.net This is an example registration for a cache server that serves all public data and protected data from the OSG VO: MY_COMBO_OSDF_CACHE : FQDN : my-combo-cache.example.net Services : XRootD cache server : Description : OSDF cache server AllowedVOs : - OSG - ANY_PUBLIC DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=my-combo-cache.example.net","title":"Initial registration"},{"location":"data/stashcache/run-stashcache-container/#configuring-the-osdf-cache","text":"In addition to the required configuration above (ports and file systems), you may also configure the behavior of your cache with the following variables using an environment variable file: Where the environment file on the docker host, /opt/xcache/.env , has (at least) the following contents, replacing with the name of your resource as registered in Topology and with the public DNS name that should be used to contact your cache: XC_RESOURCENAME= CACHE_FQDN=","title":"Configuring the OSDF Cache"},{"location":"data/stashcache/run-stashcache-container/#providing-a-host-certificate","text":"The service will need a certificate for contacting central OSDF services and for authenticating to origins. Follow our host certificate documentation to obtain a host certificate and key. Then, volume-mount the host certificate to /etc/grid-security/hostcert.pem , and the key to /etc/grid-security/hostkey.pem . Note You must restart the container whenever you renew your certificate in order for the services to pick up the new certificate. If you automate certificate renewal, you should automate restarts as well. For example, if you are using Certbot for Let's Encrypt, you should write a \"deploy hook\" as documented on the Certbot site . Note A small number of CAs in the IGTF and OSG CA cert distributions were signed with the SHA1 algorithm, which is not accepted by default starting with Enterprise Linux 9. If you are running a cache image based on OSG 23 or newer, and your situation includes any of the following: Your cache's host certificate was signed by a SHA1-signed-CA You are expecting connections from clients that authenticate themselves with a cert from a SHA1-signed-CA Your cache serves data from an origin whose host certificate was signed by a SHA1-signed-CA then you must add ENABLE_SHA1=yes to the environment variable file.","title":"Providing a host certificate"},{"location":"data/stashcache/run-stashcache-container/#optional-configuration","text":"Further behavior of the cache can be configured by setting the following in the environment variable file: XC_SPACE_HIGH_WM , XC_SPACE_LOW_WM : High-water and low-water marks for disk usage, as numbers between 0.00 (0%) and 1.00 (100%); when usage goes above the high-water mark, the cache will delete files until it hits the low-water mark. XC_RAMSIZE : Amount of memory to use for storing blocks before writting them to disk. 
(Use higher for slower disks). XC_BLOCKSIZE : Size of the blocks in the cache. XC_PREFETCH : Number of blocks to prefetch from a file at once. This controls how aggressively the cache requests portions of a file.","title":"Optional configuration"},{"location":"data/stashcache/run-stashcache-container/#running-a-cache","text":"Cache containers may be run with either multiple mounted host partitions (recommended) or a single host partition. It is recommended to use a container orchestration service such as docker-compose or kubernetes whose details are beyond the scope of this document. The following sections provide examples for starting cache containers from the command-line as well as a more production-appropriate method using systemd.","title":"Running a Cache"},{"location":"data/stashcache/run-stashcache-container/#multiple-host-partitions-recommended","text":"For improved performance and storage, especially if your cache is serving over 10 TB of data, we recommend multiple partitions for handling namespaces (HDD, SSD, or NVMe), data (HDDs), and metadata (SSDs or NVMe). Note Under this configuration the namespace partition is not used to store the files themselves. Instead, it stores symlinks to the files in the metadata and data partitions. user@host $ docker run --rm \\ --publish :8000 \\ --publish :8443 \\ --volume :/etc/grid-security/hostcert.pem \\ --volume :/etc/grid-security/hostkey.pem \\ --volume :/xcache/namespace \\ --volume :/xcache/meta1 ... --volume :/xcache/metaN --volume :/xcache/data1 ... --volume :/xcache/dataN --env-file=/opt/xcache/.env \\ opensciencegrid/stash-cache:23-release Warning For over 10 TB of assigned space we strongly encourage using this setup and mounting solid-state disks or NVMe.","title":"Multiple host partitions (recommended)"},{"location":"data/stashcache/run-stashcache-container/#single-host-partition","text":"For a simpler installation, you may use a single host partition mounted to /xcache/ : user@host $ docker run --rm \\ --publish :8000 \\ --publish :8443 \\ --volume :/xcache \\ --volume :/etc/grid-security/hostcert.pem \\ --volume :/etc/grid-security/hostkey.pem \\ --env-file = /opt/xcache/.env \\ opensciencegrid/stash-cache:23-release","title":"Single host partition"},{"location":"data/stashcache/run-stashcache-container/#running-a-cache-on-container-with-systemd","text":"This section provides an example systemd service file for the OSDF cache. It requires creating the environment file /opt/xcache/.env . Note This example systemd file assumes the HTTP port is 8000 , the HTTPS port is 8443 , the cache partition is /srv/cache , and the cert and key to use are in /etc/ssl/host.crt and /etc/ssl/host.key , respectively. 
Create the systemd service file /etc/systemd/system/docker.stash-cache.service as follows: [Unit] Description=Cache Container After=docker.service Requires=docker.service [Service] TimeoutStartSec=0 Restart=always ExecStartPre=-/usr/bin/docker stop %n ExecStartPre=-/usr/bin/docker rm %n ExecStartPre=/usr/bin/docker pull opensciencegrid/stash-cache:23-release ExecStart=/usr/bin/docker run --rm --name %n \\ --publish 8000:8000 \\ --publish 8443:8443 \\ --volume /srv/cache:/xcache \\ --volume /etc/ssl/host.crt:/etc/grid-security/hostcert.pem \\ --volume /etc/ssl/host.key:/etc/grid-security/hostkey.pem \\ --env-file /opt/xcache/.env \\ opensciencegrid/stash-cache:23-release [Install] WantedBy=multi-user.target Enable and start the service with: root@host $ systemctl enable docker.stash-cache root@host $ systemctl start docker.stash-cache Warning You must register the cache before starting it up.","title":"Running a cache on container with systemd"},{"location":"data/stashcache/run-stashcache-container/#network-optimization","text":"For caches that are connected to NICs over 40 Gbps we recommend that you disable the virtualized network and \"bind\" the container to the host network: user@host $ docker run --rm \\ --network = \"host\" \\ --volume :/cache \\ --volume :/etc/grid-security/hostcert.pem \\ --volume :/etc/grid-security/hostkey.pem \\ --env-file = /opt/xcache/.env \\ opensciencegrid/stash-cache:23-release","title":"Network optimization"},{"location":"data/stashcache/run-stashcache-container/#memory-optimization","text":"The cache uses the host's memory for two purposes: Caching files recently read from disk (via the kernel page cache). Buffering files recently received from the network before writing them to disk (to compensate for slow disks). An easy way to increase the performance of the cache is to assign it more memory. If you set a limit on the container's memory usage via the docker option --memory or Kubernetes resource limits, make sure it is at least twice the value of XC_RAMSIZE .","title":"Memory optimization"},{"location":"data/stashcache/run-stashcache-container/#validating-the-cache","text":"The cache server functions as a normal HTTP server and can interact with typical HTTP clients, such as curl . Here, is the port chosen in the docker run command, 8000 by default. user@host $ curl -O http://cache_host:/ospool/uc-shared/public/OSG-Staff/validation/test.txt curl may not correctly report a failure, so verify that the contents of the file are: Hello, World!","title":"Validating the Cache"},{"location":"data/stashcache/run-stashcache-container/#getting-help","text":"To get assistance, please use the this page .","title":"Getting Help"},{"location":"data/stashcache/vo-data/","text":"Getting VO Data into the OSDF \u00b6 This document describes the steps required to manage a VO's role in the Open Science Data Federation (OSDF) including selecting a namespace, registration, and selecting which resources are allowed to host or cache your data. For general information about the OSDF, see the overview document . Site admins should work together with VO managers in order to perform these steps. Definitions \u00b6 Namespace: a directory tree in the federation that is used to find VO data. Public data: data that can be read by anyone. Protected data: data that requires authorization to read. Requirements \u00b6 In order for a Virtual Organization to join the federation, the VO must already be registered in OSG Topology. See the registration document . 
Choosing Namespaces \u00b6 The VO must pick one or more \"namespaces\" for their data. A namespace is a directory tree in the federation where VO data is found. Note Namespaces are global across the federation, so you must work with the OSG Operations team to ensure that your VO's namespaces do not collide with those of another VO. Send an email to help@osg-htc.org with the following subject: \"Requesting OSDF namespaces for VO \" and put the desired namespaces in the body of the email. A namespace should be easy for your users to remember but not so generic that it collides with other VOs. We recommend using the lowercase version of your VO as the top-level directory. In addition, public data, if any, should be stored in a subdirectory named PUBLIC , and protected data, if any, should be stored in a subdirectory named PROTECTED . Putting this together, if your VO is named Astro , you should have: /astro/PUBLIC for public data /astro/PROTECTED for protected data Separating the public and protected data in separate directory trees is preferred for technical reasons. Registering Data Federation Information \u00b6 The VO must allow one or more origins to host their data. An origin will typically be hosted on a site owned by the VO. For information about setting up an origin, see the installation document . In order to declare your VO's role in the federation, you must add OSDF information to your VO's YAML file in the OSG Topology repository. For example, the full registration for the Astro VO may look something like the following: DataFederations : StashCache : Namespaces : - Path : /astro/PUBLIC Authorizations : - PUBLIC AllowedCaches : - ANY AllowedOrigins : - ASTRO_OSDF_ORIGIN - Path : /astro/PROTECTED Authorizations : - FQAN : /Astro - DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=People/CN=Matyas Selmeci - SciTokens : Issuer : https://astro.org Base Path : /astro/PROTECTED AllowedCaches : - ASTRO_EAST_CACHE - ASTRO_WEST_CACHE AllowedOrigins : - ASTRO_AUTH_OSDF_ORIGIN The sections are described below. Namespaces section \u00b6 In the namespaces section, you will declare one or more namespaces. A namespace is a directory tree in the data federation that is owned by a VO/collaboration. Each namespace requires: a Path that is the path to the directory tree, e.g. /astro/PUBLIC an Authorizations list which describes how users are authorized to access data within the namespace an AllowedCaches list of the OSDF caches that are allowed to cache the data within the namespace an AllowedOrigins list of the OSDF origins that are allowed to serve the data within the namespace In addition, a namespace may have the following optional attributes: a Writeback endpoint that is an HTTPS URL like https://stash-xrd.osgconnect.net:1094 that can be used for jobs to write data to the origin a DirList endpoint that is an HTTPS URL like https://origin-auth2001.chtc.wisc.edu:1095 that can be used for getting a directory listing of that namespace Authorizations list \u00b6 The Authorizations list of each namespace describes how a user can get authorized in order to access the data within the namespace. 
The list will contain one or more of these: FQAN: allows someone using a proxy with the specified VOMS FQAN DN: allows someone using a proxy with that specific DN PUBLIC allows anyone; this is used for public data SciTokens allows someone using a SciToken with the given parameters, which are described below A complete declaration looks like: Namespaces : - Path : /astro/PUBLIC Authorizations : - PUBLIC AllowedCaches : ... AllowedOrigins : ... - Path : /astro/PROTECTED Authorizations : - FQAN : /Astro - DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=People/CN=Matyas Selmeci - SciTokens : Issuer : https://astro.org Base Path : /astro/PROTECTED Map Subject : True AllowedCaches : ... AllowedOrigins : ... This declares two namespaces: /astro/PUBLIC for public data, and /astro/PROTECTED which can only be read by someone with the /Astro FQAN, by Matyas Selmeci, or by someone with a SciToken issued by https://astro.org . SciTokens \u00b6 A SciTokens authorization has multiple parameters: Issuer (required) is the token issuer of the SciToken that the authorization accepts. Base Path (required) is a path that will be prepended to the scopes of the token in order to construct the full path to the file(s) that the bearer of the token is allowed to access. For example, if Base Path is set to /astro/PROTECTED then a token with the scope read:/matyas will have the permission to read from the directory tree under /astro/PROTECTED/matyas . The correct value for Base Path depends on how the issuer is set up, but we recommend that you set Base Path to the namespace path, and configure the issuer to create scopes relative to the namespace path. Map Subject (optional, False if not specified) should be set to True if the origin uses the XRootD-Multiuser plugin. It will cause the origin to use the token subject ( sub field) to map to a Unix user in order to access files. Restricted Path (optional) is a further restriction on paths the token is allowed to access. Only tokens whose scopes start with the Restricted Path will be accepted. Use this only if your issuer does not create relative scopes. AllowedCaches list \u00b6 The VO must allow one or more OSDF caches to cache their data. The more places a VO's data can be cached in, the bigger the data transfer benefit for the VO. The majority of caches across OSG will automatically cache all \"public\" VO data. Caching \"protected\" VO data will often be done on a site owned by the VO. For information about setting up a cache, see the installation document . AllowedCaches is a list of which caches are allowed to host copies of your data. There are two cases: If you only have public data, your AllowedCaches list can look like: AllowedCaches : - ANY This allows any cache to host a copy of your data. If you have some protected data, then AllowedCaches is a list of resources that are allowed to cache your data. A resource is an entry in a /topology///.yaml file, for example CHTC_OSDF_CACHE . The following requirements must be met for the resource: It must have an \"XRootD cache server\" service It must have an AllowedVOs list that includes either your VO, \"ANY\", or \"ANY_PUBLIC\" It must have a DN attribute with the DN of its host cert AllowedOrigins list \u00b6 AllowedOrigins is a list of which origins are allowed to host your data. This is a list of resources . A resource is an entry in a /topology///.yaml file, for example CHTC_OSDF_ORIGIN . 
The following requirements must be met for the resource: It must have an \"XRootD origin server\" service It must have an AllowedVOs list that includes either your VO or \"ANY\"","title":"Publishing VO data"},{"location":"data/stashcache/vo-data/#getting-vo-data-into-the-osdf","text":"This document describes the steps required to manage a VO's role in the Open Science Data Federation (OSDF) including selecting a namespace, registration, and selecting which resources are allowed to host or cache your data. For general information about the OSDF, see the overview document . Site admins should work together with VO managers in order to perform these steps.","title":"Getting VO Data into the OSDF"},{"location":"data/stashcache/vo-data/#definitions","text":"Namespace: a directory tree in the federation that is used to find VO data. Public data: data that can be read by anyone. Protected data: data that requires authorization to read.","title":"Definitions"},{"location":"data/stashcache/vo-data/#requirements","text":"In order for a Virtual Organization to join the federation, the VO must already be registered in OSG Topology. See the registration document .","title":"Requirements"},{"location":"data/stashcache/vo-data/#choosing-namespaces","text":"The VO must pick one or more \"namespaces\" for their data. A namespace is a directory tree in the federation where VO data is found. Note Namespaces are global across the federation, so you must work with the OSG Operations team to ensure that your VO's namespaces do not collide with those of another VO. Send an email to help@osg-htc.org with the following subject: \"Requesting OSDF namespaces for VO \" and put the desired namespaces in the body of the email. A namespace should be easy for your users to remember but not so generic that it collides with other VOs. We recommend using the lowercase version of your VO as the top-level directory. In addition, public data, if any, should be stored in a subdirectory named PUBLIC , and protected data, if any, should be stored in a subdirectory named PROTECTED . Putting this together, if your VO is named Astro , you should have: /astro/PUBLIC for public data /astro/PROTECTED for protected data Separating the public and protected data in separate directory trees is preferred for technical reasons.","title":"Choosing Namespaces"},{"location":"data/stashcache/vo-data/#registering-data-federation-information","text":"The VO must allow one or more origins to host their data. An origin will typically be hosted on a site owned by the VO. For information about setting up an origin, see the installation document . In order to declare your VO's role in the federation, you must add OSDF information to your VO's YAML file in the OSG Topology repository. For example, the full registration for the Astro VO may look something like the following: DataFederations : StashCache : Namespaces : - Path : /astro/PUBLIC Authorizations : - PUBLIC AllowedCaches : - ANY AllowedOrigins : - ASTRO_OSDF_ORIGIN - Path : /astro/PROTECTED Authorizations : - FQAN : /Astro - DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=People/CN=Matyas Selmeci - SciTokens : Issuer : https://astro.org Base Path : /astro/PROTECTED AllowedCaches : - ASTRO_EAST_CACHE - ASTRO_WEST_CACHE AllowedOrigins : - ASTRO_AUTH_OSDF_ORIGIN The sections are described below.","title":"Registering Data Federation Information"},{"location":"data/stashcache/vo-data/#namespaces-section","text":"In the namespaces section, you will declare one or more namespaces. 
A namespace is a directory tree in the data federation that is owned by a VO/collaboration. Each namespace requires: a Path that is the path to the directory tree, e.g. /astro/PUBLIC an Authorizations list which describes how users are authorized to access data within the namespace an AllowedCaches list of the OSDF caches that are allowed to cache the data within the namespace an AllowedOrigins list of the OSDF origins that are allowed to serve the data within the namespace In addition, a namespace may have the following optional attributes: a Writeback endpoint that is an HTTPS URL like https://stash-xrd.osgconnect.net:1094 that can be used for jobs to write data to the origin a DirList endpoint that is an HTTPS URL like https://origin-auth2001.chtc.wisc.edu:1095 that can be used for getting a directory listing of that namespace","title":"Namespaces section"},{"location":"data/stashcache/vo-data/#authorizations-list","text":"The Authorizations list of each namespace describes how a user can get authorized in order to access the data within the namespace. The list will contain one or more of these: FQAN: allows someone using a proxy with the specified VOMS FQAN DN: allows someone using a proxy with that specific DN PUBLIC allows anyone; this is used for public data SciTokens allows someone using a SciToken with the given parameters, which are described below A complete declaration looks like: Namespaces : - Path : /astro/PUBLIC Authorizations : - PUBLIC AllowedCaches : ... AllowedOrigins : ... - Path : /astro/PROTECTED Authorizations : - FQAN : /Astro - DN : /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=People/CN=Matyas Selmeci - SciTokens : Issuer : https://astro.org Base Path : /astro/PROTECTED Map Subject : True AllowedCaches : ... AllowedOrigins : ... This declares two namespaces: /astro/PUBLIC for public data, and /astro/PROTECTED which can only be read by someone with the /Astro FQAN, by Matyas Selmeci, or by someone with a SciToken issued by https://astro.org .","title":"Authorizations list"},{"location":"data/stashcache/vo-data/#scitokens","text":"A SciTokens authorization has multiple parameters: Issuer (required) is the token issuer of the SciToken that the authorization accepts. Base Path (required) is a path that will be prepended to the scopes of the token in order to construct the full path to the file(s) that the bearer of the token is allowed to access. For example, if Base Path is set to /astro/PROTECTED then a token with the scope read:/matyas will have the permission to read from the directory tree under /astro/PROTECTED/matyas . The correct value for Base Path depends on how the issuer is set up, but we recommend that you set Base Path to the namespace path, and configure the issuer to create scopes relative to the namespace path. Map Subject (optional, False if not specified) should be set to True if the origin uses the XRootD-Multiuser plugin. It will cause the origin to use the token subject ( sub field) to map to a Unix user in order to access files. Restricted Path (optional) is a further restriction on paths the token is allowed to access. Only tokens whose scopes start with the Restricted Path will be accepted. Use this only if your issuer does not create relative scopes.","title":"SciTokens"},{"location":"data/stashcache/vo-data/#allowedcaches-list","text":"The VO must allow one or more OSDF caches to cache their data. The more places a VO's data can be cached in, the bigger the data transfer benefit for the VO. 
The majority of caches across OSG will automatically cache all \"public\" VO data. Caching \"protected\" VO data will often be done on a site owned by the VO. For information about setting up a cache, see the installation document . AllowedCaches is a list of which caches are allowed to host copies of your data. There are two cases: If you only have public data, your AllowedCaches list can look like: AllowedCaches : - ANY This allows any cache to host a copy of your data. If you have some protected data, then AllowedCaches is a list of resources that are allowed to cache your data. A resource is an entry in a /topology///.yaml file, for example CHTC_OSDF_CACHE . The following requirements must be met for the resource: It must have an \"XRootD cache server\" service It must have an AllowedVOs list that includes either your VO, \"ANY\", or \"ANY_PUBLIC\" It must have a DN attribute with the DN of its host cert","title":"AllowedCaches list"},{"location":"data/stashcache/vo-data/#allowedorigins-list","text":"AllowedOrigins is a list of which origins are allowed to host your data. This is a list of resources . A resource is an entry in a /topology///.yaml file, for example CHTC_OSDF_ORIGIN . The following requirements must be met for the resource: It must have an \"XRootD origin server\" service It must have an AllowedVOs list that includes either your VO or \"ANY\"","title":"AllowedOrigins list"},{"location":"data/xrootd/install-client/","text":"Using XRootD \u00b6 XRootD is a high performance data system widely used by several science VOs on OSG to store and to distribute data to jobs. It can be used to create a data store from distributed data nodes or to serve data to systems using a distributed caching architecture. Either mode of operation requires you to install the XRootD client software. This page provides instructions for accessing data on XRootD data systems using a variety of methods. As a user you have three different ways to interact with XRootD: Using the XRootD clients Using a XRootDFS FUSE mount to access a local XRootD data store Using LD_PRELOAD to use XRootD libraries with Unix tools We'll show how to install the XRootD client software and use all three mechanisms to access data. Note Only the client tools method should be used to access XRootD systems across a WAN link. Before Starting \u00b6 As with all OSG software installations, there are some one-time (per host) steps to prepare in advance: Ensure the host has a supported operating system Obtain root access to the host Prepare the required Yum repositories Install CA certificates If you are using the FUSE mount, you should also consider the following requirement: User IDs: If it does not exist already, you will need to create a xrootd user Using the XRootD client software \u00b6 Installing the XRootD Client \u00b6 If you are planning on interacting with XRootD using the XRootD client, then you'll need to install the XRootD client RPM. Installing the XRootD Client RPM \u00b6 The following steps will install the rpm on your system. Clean yum cache: root@client $ yum clean all --enablerepo = \\* Update software: root@client $ yum update This command will update all packages Install XRootD Client rpm: root@client $ yum install xrootd-client Using the XRootD Client \u00b6 Once the xrootd-client rpm is installed, you should be able to use the xrdcp command to copy files to and from XRootD systems and the local file system. 
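To confirm that the client is installed and on your PATH before copying any data, you can ask it for its version (the exact output depends on the XRootD release that was installed):
user@client $ xrdcp --version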
For example: user@client $ echo \"This is a test\" >/tmp/test user@client $ xrdcp /tmp/test xroot://redirector.domain.org:1094//storage/path/test user@client $ xrdcp xroot://redirector.domain.org:1094//storage/path/test /tmp/test1 user@client $ diff /tmp/test1 /tmp/test For other operations, you'll need to use the xrdfs command. This command allows you to do file operations such as creating directories, removing directories, deleting files, and moving files on a XRootD system, provided you have the appropriate authorization. The xrdfs command can be used interactively by running xrdfs xroot://redirector.domain.org:1094/ . Alternatively, you can use it in batch mode by adding the xrdfs command after the xroot URI. For example: user@client $ echo \"This is a test\" >/tmp/test user@client $ xrdfs xroot://redirector.domain.org:1094/ mkdir /storage/path/test user@client $ xrdcp xroot://redirector.domain.org:1094//storage/path/test/test1 /tmp/test1 user@client $ xrdfs xroot://redirector.domain.org:1094/ ls /storage/path/test/test1 user@client $ xrdfs xroot://redirector.domain.org:1094/ rm /storage/path/test/test1 user@client $ xrdfs xroot://redirector.domain.org:1094/ rmdir /storage/path/test Note To access remote XRootD resources, you will may need to use a VOMS proxy in order to authenticate successfully. The XRootD client tools will automatically locate your proxy if you generate it using voms-proxy-init , otherwise you can set the X509_USER_PROXY environment variable to the location of the proxy XRootD should use. Validation \u00b6 Assuming that there is a file called test_file in your XRootD data store, you can do the following to validate your installation. Here we assume that there is a file on your XRootD system at /storage/path/test_file . user@client $ xrdcp xroot://redirector.yourdomain.org:1094//storage/path/test_file /tmp/test1 Using XRootDFS FUSE mount \u00b6 This section will explain how to install, setup, and interact with XRootD using a FUSE mount. This method of accessing XRootD only works when accessing a local XRootD system. Installing the XRootD FUSE RPM \u00b6 If you are planning on using a FUSE mount, you'll need to install the xrootd-fuse rpm by running the following commands: Clean yum cache: root@client $ yum clean all --enablerepo = \\* Update software: root@client $ yum update Install XRootD FUSE rpm: root@client $ yum install xrootd-fuse Configuring the FUSE Mount \u00b6 Once the appropriate RPMs are installed, the FUSE setup will need further configuration. Modify /etc/fstab by adding the following entries: .... xrootdfs /mnt/xrootd fuse rdr=xroot://:1094/,uid=xrootd 0 0 Replace /mnt/xrootd with the path that you would like to access with. Create /mnt/xrootd directory. Make sure the xrootd user exists on the system. Once you are finished, you can mount it: mount /mnt/xrootd You should now be able to run UNIX commands such as ls /mnt/xrootd to see the contents of the XRootD server. Using the XRootDFS FUSE Mount \u00b6 The directory mounted using XRootDFS can be used as any other directory mounted on your file system. All the normal Unix commands should work out of the box. Try using cp , rm , mv , mkdir , rmdir . 
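Before exercising the mount, it can be useful to confirm that the FUSE file system is actually mounted (the /mnt/xrootd path below is the example mount point used throughout this section):
user@client $ mount | grep xrootdfs
user@client $ df -h /mnt/xrootd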
Assuming your mount is /mnt/xrootd : user@client $ echo \"This is a new test\" >/tmp/test user@client $ mkdir -p /mnt/xrootd/subdir/sub2 user@client $ cp /tmp/test /mnt/xrootd/subdir/sub2/test user@client $ cp /mnt/xrootd/subdir/sub2/test /mnt/xrootd/subdir/sub2/test1 user@client $ cp /mnt/xuserd/subdir/sub2/test1 /tmp/test1 user@client $ diff /tmp/test1 /tmp/test user@client $ rm -r /mnt/xrootd/subdir Validation \u00b6 Assuming your mount is /mnt/xrootd and that there is a file called test_file in your XRootD data store: user@client $ cp /mnt/xrootd/test_file /tmp/test1 Using LD_PRELOAD to access XRootD \u00b6 Installing XRootD Libraries For LD_PRELOAD \u00b6 In order to use LD_PRELOAD to access XRootD, you'll need to install the XRootD client libraries. The following steps will install them on your system: Clean yum cache: root@client $ yum clean all --enablerepo = \\* Update software: root@client $ yum update This command will update all packages Install XRootD Client rpm: root@client $ yum install xrootd-client Using LD_PRELOAD method \u00b6 In order to use the LD_PRELOAD method to access a XRootD data store, you'll need to change your environment to use the XRootD libraries in conjunction with the standard Unix binaries. This is done by setting the LD_PRELOAD environment variable. Once this is done, the standard unix commands like mkdir , rm , cp , etc. will work with xroot URIs. For example: user@client $ export LD_PRELOAD = /usr/lib64/libXrdPosixPreload.so user@client $ echo \"This is a new test\" >/tmp/test user@client $ mkdir xroot://redirector.yourdomain.org:1094//storage/path/subdir user@client $ cp /tmp/test xroot://redirector.yourdomain.org:1094//storage/path/subdir/test user@client $ cp xuser://redirector.yourdomain.org:1094//storage/path/subdir/test /tmp/test1 user@client $ diff /tmp/test1 /tmp/test user@client $ rm xroot://redirector.yourdomain.org:1094//storage/path/subdir/test user@client $ rmdir xroot://redirector.yourdomain.org:1094//storage/path/subdir Validation \u00b6 Assuming that there is a file called test_file in your XRootD data store, the following steps will validate your installation: user@client $ export LD_PRELOAD = /usr/lib64/libXrdPosixPreload.so user@client $ cp xroot://redirector.yourdomain.org:1094//storage/path/test_file /tmp/test1 How to get Help? \u00b6 If you cannot resolve the problem, please consult this page for assistance..","title":"Using XRootD"},{"location":"data/xrootd/install-client/#using-xrootd","text":"XRootD is a high performance data system widely used by several science VOs on OSG to store and to distribute data to jobs. It can be used to create a data store from distributed data nodes or to serve data to systems using a distributed caching architecture. Either mode of operation requires you to install the XRootD client software. This page provides instructions for accessing data on XRootD data systems using a variety of methods. As a user you have three different ways to interact with XRootD: Using the XRootD clients Using a XRootDFS FUSE mount to access a local XRootD data store Using LD_PRELOAD to use XRootD libraries with Unix tools We'll show how to install the XRootD client software and use all three mechanisms to access data. 
Note Only the client tools method should be used to access XRootD systems across a WAN link.","title":"Using XRootD"},{"location":"data/xrootd/install-client/#before-starting","text":"As with all OSG software installations, there are some one-time (per host) steps to prepare in advance: Ensure the host has a supported operating system Obtain root access to the host Prepare the required Yum repositories Install CA certificates If you are using the FUSE mount, you should also consider the following requirement: User IDs: If it does not exist already, you will need to create a xrootd user","title":"Before Starting"},{"location":"data/xrootd/install-client/#using-the-xrootd-client-software","text":"","title":"Using the XRootD client software"},{"location":"data/xrootd/install-client/#installing-the-xrootd-client","text":"If you are planning on interacting with XRootD using the XRootD client, then you'll need to install the XRootD client RPM.","title":"Installing the XRootD Client"},{"location":"data/xrootd/install-client/#installing-the-xrootd-client-rpm","text":"The following steps will install the rpm on your system. Clean yum cache: root@client $ yum clean all --enablerepo = \\* Update software: root@client $ yum update This command will update all packages Install XRootD Client rpm: root@client $ yum install xrootd-client","title":"Installing the XRootD Client RPM"},{"location":"data/xrootd/install-client/#using-the-xrootd-client","text":"Once the xrootd-client rpm is installed, you should be able to use the xrdcp command to copy files to and from XRootD systems and the local file system. For example: user@client $ echo \"This is a test\" >/tmp/test user@client $ xrdcp /tmp/test xroot://redirector.domain.org:1094//storage/path/test user@client $ xrdcp xroot://redirector.domain.org:1094//storage/path/test /tmp/test1 user@client $ diff /tmp/test1 /tmp/test For other operations, you'll need to use the xrdfs command. This command allows you to do file operations such as creating directories, removing directories, deleting files, and moving files on a XRootD system, provided you have the appropriate authorization. The xrdfs command can be used interactively by running xrdfs xroot://redirector.domain.org:1094/ . Alternatively, you can use it in batch mode by adding the xrdfs command after the xroot URI. For example: user@client $ echo \"This is a test\" >/tmp/test user@client $ xrdfs xroot://redirector.domain.org:1094/ mkdir /storage/path/test user@client $ xrdcp xroot://redirector.domain.org:1094//storage/path/test/test1 /tmp/test1 user@client $ xrdfs xroot://redirector.domain.org:1094/ ls /storage/path/test/test1 user@client $ xrdfs xroot://redirector.domain.org:1094/ rm /storage/path/test/test1 user@client $ xrdfs xroot://redirector.domain.org:1094/ rmdir /storage/path/test Note To access remote XRootD resources, you will may need to use a VOMS proxy in order to authenticate successfully. The XRootD client tools will automatically locate your proxy if you generate it using voms-proxy-init , otherwise you can set the X509_USER_PROXY environment variable to the location of the proxy XRootD should use.","title":"Using the XRootD Client"},{"location":"data/xrootd/install-client/#validation","text":"Assuming that there is a file called test_file in your XRootD data store, you can do the following to validate your installation. Here we assume that there is a file on your XRootD system at /storage/path/test_file . 
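If no such file exists yet and you have write access to the data store, you can stage one first with the same xrdcp pattern shown earlier (the redirector host and storage path are the same placeholders used throughout this section):
user@client $ echo \"validation\" >/tmp/test_file
user@client $ xrdcp /tmp/test_file xroot://redirector.yourdomain.org:1094//storage/path/test_file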
user@client $ xrdcp xroot://redirector.yourdomain.org:1094//storage/path/test_file /tmp/test1","title":"Validation"},{"location":"data/xrootd/install-client/#using-xrootdfs-fuse-mount","text":"This section will explain how to install, setup, and interact with XRootD using a FUSE mount. This method of accessing XRootD only works when accessing a local XRootD system.","title":"Using XRootDFS FUSE mount"},{"location":"data/xrootd/install-client/#installing-the-xrootd-fuse-rpm","text":"If you are planning on using a FUSE mount, you'll need to install the xrootd-fuse rpm by running the following commands: Clean yum cache: root@client $ yum clean all --enablerepo = \\* Update software: root@client $ yum update Install XRootD FUSE rpm: root@client $ yum install xrootd-fuse","title":"Installing the XRootD FUSE RPM"},{"location":"data/xrootd/install-client/#configuring-the-fuse-mount","text":"Once the appropriate RPMs are installed, the FUSE setup will need further configuration. Modify /etc/fstab by adding the following entries: .... xrootdfs /mnt/xrootd fuse rdr=xroot://:1094/,uid=xrootd 0 0 Replace /mnt/xrootd with the path that you would like to access with. Create /mnt/xrootd directory. Make sure the xrootd user exists on the system. Once you are finished, you can mount it: mount /mnt/xrootd You should now be able to run UNIX commands such as ls /mnt/xrootd to see the contents of the XRootD server.","title":"Configuring the FUSE Mount"},{"location":"data/xrootd/install-client/#using-the-xrootdfs-fuse-mount","text":"The directory mounted using XRootDFS can be used as any other directory mounted on your file system. All the normal Unix commands should work out of the box. Try using cp , rm , mv , mkdir , rmdir . Assuming your mount is /mnt/xrootd : user@client $ echo \"This is a new test\" >/tmp/test user@client $ mkdir -p /mnt/xrootd/subdir/sub2 user@client $ cp /tmp/test /mnt/xrootd/subdir/sub2/test user@client $ cp /mnt/xrootd/subdir/sub2/test /mnt/xrootd/subdir/sub2/test1 user@client $ cp /mnt/xuserd/subdir/sub2/test1 /tmp/test1 user@client $ diff /tmp/test1 /tmp/test user@client $ rm -r /mnt/xrootd/subdir","title":"Using the XRootDFS FUSE Mount"},{"location":"data/xrootd/install-client/#validation_1","text":"Assuming your mount is /mnt/xrootd and that there is a file called test_file in your XRootD data store: user@client $ cp /mnt/xrootd/test_file /tmp/test1","title":"Validation"},{"location":"data/xrootd/install-client/#using-ld_preload-to-access-xrootd","text":"","title":"Using LD_PRELOAD to access XRootD"},{"location":"data/xrootd/install-client/#installing-xrootd-libraries-for-ld_preload","text":"In order to use LD_PRELOAD to access XRootD, you'll need to install the XRootD client libraries. The following steps will install them on your system: Clean yum cache: root@client $ yum clean all --enablerepo = \\* Update software: root@client $ yum update This command will update all packages Install XRootD Client rpm: root@client $ yum install xrootd-client","title":"Installing XRootD Libraries For LD_PRELOAD"},{"location":"data/xrootd/install-client/#using-ld_preload-method","text":"In order to use the LD_PRELOAD method to access a XRootD data store, you'll need to change your environment to use the XRootD libraries in conjunction with the standard Unix binaries. This is done by setting the LD_PRELOAD environment variable. Once this is done, the standard unix commands like mkdir , rm , cp , etc. will work with xroot URIs. 
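If the preload approach does not appear to take effect, first check that the preload library installed with the client packages is present at the path used in the examples below:
user@client $ ls -l /usr/lib64/libXrdPosixPreload.so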
For example: user@client $ export LD_PRELOAD = /usr/lib64/libXrdPosixPreload.so user@client $ echo \"This is a new test\" >/tmp/test user@client $ mkdir xroot://redirector.yourdomain.org:1094//storage/path/subdir user@client $ cp /tmp/test xroot://redirector.yourdomain.org:1094//storage/path/subdir/test user@client $ cp xuser://redirector.yourdomain.org:1094//storage/path/subdir/test /tmp/test1 user@client $ diff /tmp/test1 /tmp/test user@client $ rm xroot://redirector.yourdomain.org:1094//storage/path/subdir/test user@client $ rmdir xroot://redirector.yourdomain.org:1094//storage/path/subdir","title":"Using LD_PRELOAD method"},{"location":"data/xrootd/install-client/#validation_2","text":"Assuming that there is a file called test_file in your XRootD data store, the following steps will validate your installation: user@client $ export LD_PRELOAD = /usr/lib64/libXrdPosixPreload.so user@client $ cp xroot://redirector.yourdomain.org:1094//storage/path/test_file /tmp/test1","title":"Validation"},{"location":"data/xrootd/install-client/#how-to-get-help","text":"If you cannot resolve the problem, please consult this page for assistance..","title":"How to get Help?"},{"location":"data/xrootd/install-cms-xcache/","text":"Installing the CMS XCache \u00b6 This document describes how to install a CMS XCache. This service allows a site or regional network to cache data frequently used by the CMS experiment , reducing data transfer over the wide-area network and decreasing access latency. The are two types of installations described in this document: single or multinode cache. The difference might be based on the total disk that your cache needs. Before Starting \u00b6 Before starting the installation process, consider the following requirements: Operating system: A RHEL 7 or compatible operating systems. User IDs: If they do not exist already, the installation will create the Linux user IDs xrootd Host certificate: Required for client authentication and authentication with CMS VOMS Server See our documentation for instructions on how to request and install host certificates. Network ports: The cache service requires the following ports open: Inbound TCP port 1094 for file access via the XRootD protocol Outbound UDP port 9930 for reporting to xrd-report.osgstorage.org and xrd-mon.osgstorage.org for monitoring Hardware requirements: We recommend that a cache has at least 10Gbps connectivity, 100TB of disk space for the whole cache (can be divided among several caches), and 8GB of RAM. As with all OSG software installations, there are some one-time steps to prepare in advance: Obtain root access to the host Prepare the required Yum repositories Install CA certificates Installing the Cache \u00b6 The CMS XCache ROM software consists of an XRootD server with special configuration and supporting services. To simplify installation, OSG provides convenience RPMs that install all required packages with a single command: root@host # yum install cms-xcache Configuring the Cache \u00b6 First, you must create a \"cache directory\", which will be used to store downloaded files. By default this is /mnt/stash . We recommend using a separate file system for the cache directory, with at least 1 TB of storage available. Note The cache directory must be writable by the xrootd:xrootd user and group. The cms-xcache package provides default configuration files in /etc/xrootd/xrootd-cms-xcache.cfg and /etc/xrootd/config.d/ . 
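To see which configuration snippets are actually present on your host after installation (the exact set of files varies between releases, so the listing is only informational):
root@host # ls /etc/xrootd/config.d/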
Administrators may provide additional configuration by placing files in /etc/xrootd/config.d/1*.cfg (for files that need to be processed BEFORE the OSG configuration) or /etc/xrootd/config.d/9*.cfg (for files that need to be processed AFTER the OSG configuration). You must configure every variable in /etc/xrootd/config.d/10-common-site-local.cfg . The mandatory variables to configure are: set rootdir = /mnt/stash : the mounted filesystem path to export. This document refers to this as /mnt/stash . set resourcename = YOUR_RESOURCE_NAME : the resource name registered with the OSG for example (\"T2_US_UCSD\") Note XRootD can manage a set of independent disk for the cache. So you can modify file 90-cms-xcache-disks.cfg and add the disks there then rootdir just becomes a place to hold symlinks. Ensure the xrootd service has a certificate \u00b6 The service will need a certificate for reporting and to authenticate to CMS AAA. The easiest solution for this is to use your host certificate and key as follows: Copy the host certificate to /etc/grid-security/xrd/xrd{cert,key}.pem Set the owner of the directory and contents /etc/grid-security/xrd/ to xrootd:xrootd : root@host # chown -R xrootd:xrootd /etc/grid-security/xrd/ Note You must repeat the above steps whenever you renew your host certificate. If you automate certificate renewal, you should automate copying as well. For example, if you are using Certbot for Let's Encrypt, you should write a \"deploy hook\" as documented here . Note You must also register this certificate with the CMS VOMS (https://voms2.cern.ch:8443/voms/cms/) Configuring Optional Features \u00b6 Adjust disk utilization \u00b6 To adjust the disk utilization of your cache, create or edit a file named /etc/xrootd/config.d/90-local.cfg and set the values of pfc.diskusage . pfc.diskusage 0.90 0.95 The two values correspond to the low and high usage water marks, respectively. When usage goes above the high water mark, the XRootD service will delete cached files until usage goes below the low water mark. Modify the storage access settings at a site \u00b6 In order for CMSSW jobs to use the cache at your site you need to modify the storage.xml and create the following rules # Portions of /store in xcache Note If you are installing a multinode cache then instead of yourlocalcache:1094 url should be changed for yourcacheredirector:2040 Enable remote debugging \u00b6 XRootD provides remote debugging via a read-only file system named digFS. This feature is disabled by default, but you may enable it if you need help troubleshooting your server. To enable remote debugging, edit /etc/xrootd/digauth.cfg and specify the authorizations for reading digFS. An example of authorizations: all allow gsi g=/glow h=*.cs.wisc.edu This gives access to the config file, log files, core files, and process information to anyone from *.cs.wisc.edu in the /glow VOMS group. See the XRootD manual for the full syntax. Remote debugging should only be enabled for as long as you need assistance. As soon as your issue has been resolved, revert any changes you have made to /etc/xrootd/digauth.cfg . Installing a Multinode Cache (optional) \u00b6 Some sites would like to have a single logical cache composed of several nodes as shown below: This can be achieved by following the next steps Install an XCache redirector \u00b6 This can be a simple lightweight virtual machine and will be the single point of contact from jobs to the caches. 
Install the redirector package root@host # yum install xcache-redirector Create file named /etc/xrootd/config.d/04-local-redir.cfg with contents: all.manager yourlocalredir:2041 You must configure every variable in /etc/xrootd/config.d/10-common-site-local.cfg . The mandatory variables to configure are: set rootdir = /mnt/stash : the mounted filesystem path to export. This document refers to this as /mnt/stash . set resourcename = YOUR_RESOURCE_NAME : the resource name registered with the OSG for example (\"T2_US_UCSD\") Start and enable the cmsd and xrootd proccess: Software Service name Notes XRootD cmsd@xcache-redir.service The cmsd daemon that interact with the different xrootd servers XRootD xrootd@xcache-redir.service The xrootd daemon which performs authenticated data transfers Configuring each of your cache nodes \u00b6 Create a config file in the nodes where you installed your caches /etc/xrootd/config.d/94-xrootd-manager.cfg with the following contents: all.manager yourlocalredir:2041 Start and enable the cmsd service: Software Service name Notes XRootD cmsd@cms-xcache.service The xrootd daemon which performs authenticated data transfers Managing CMS XCache and associated services \u00b6 These services must be managed by systemctl and may start additional services as dependencies. As a reminder, here are common service commands (all run as root ) for EL7: To... On EL7, run the command... Start a service systemctl start Stop a service systemctl stop Enable a service to start on boot systemctl enable Disable a service from starting on boot systemctl disable CMS XCache services \u00b6 Software Service name Notes XRootD xrootd@cms-xcache.service The XRootD daemon, which performs the data transfers XRootD (Optional) cmsd@cms-xcache.service The cmsd daemon that interact with the different xrootd servers Fetch CRL EL8: fetch-crl.timer EL7: fetch-crl-boot and fetch-crl-cron Required to authenticate monitoring services. See CA documentation for more info xrootd-renew-proxy.service Renew a proxy for downloads to the cache xrootd-renew-proxy.timer Trigger daily proxy renewal XCache redirector services (Optional) \u00b6 In the node where the cache redirector is installed these are the list of services: Software Service name Notes XRootD (Optional) xrootd@xcache-redir.service The xrootd daemon which performs authenticated data transfers XRootD (Optional) cmsd@xcache-redir.service The xrootd daemon which performs authenticated data transfers Validating the Cache \u00b6 The cache server functions as a normal CMS XRootD server so first verify it with a personal CMS X.509 proxy: === VO cms extension information === VO : cms subject : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=efajardo/CN=722781/CN=Edgar Fajardo Hernandez issuer : /DC=ch/DC=cern/OU=computers/CN=lcg-voms2.cern.ch attribute : /cms/Role=NULL/Capability=NULL attribute : /cms/uscms/Role=NULL/Capability=NULL timeleft : 71:59:46 uri : lcg-voms2.cern.ch:15002 Then test using xrdcp directly in your cache: user@host $ xrdcp -vf -d 1 root://cache_host:1094//store/data/Run2017B/SingleElectron/MINIAOD/31Mar2018-v1/60000/9E0F8458-EA37-E811-93F1-008CFAC919F0.root /dev/null Getting Help \u00b6 To get assistance, please use the this page .","title":"Install CMS XCache"},{"location":"data/xrootd/install-cms-xcache/#installing-the-cms-xcache","text":"This document describes how to install a CMS XCache. 
This service allows a site or regional network to cache data frequently used by the CMS experiment , reducing data transfer over the wide-area network and decreasing access latency. There are two types of installations described in this document: a single-node or a multinode cache. The choice typically depends on the total amount of disk that your cache needs.","title":"Installing the CMS XCache"},{"location":"data/xrootd/install-cms-xcache/#before-starting","text":"Before starting the installation process, consider the following requirements: Operating system: RHEL 7 or a compatible operating system. User IDs: If it does not exist already, the installation will create the Linux user ID xrootd Host certificate: Required for client authentication and authentication with the CMS VOMS server. See our documentation for instructions on how to request and install host certificates. Network ports: The cache service requires the following ports open: Inbound TCP port 1094 for file access via the XRootD protocol Outbound UDP port 9930 for reporting to xrd-report.osgstorage.org and xrd-mon.osgstorage.org for monitoring Hardware requirements: We recommend that a cache has at least 10Gbps connectivity, 100TB of disk space for the whole cache (can be divided among several caches), and 8GB of RAM. As with all OSG software installations, there are some one-time steps to prepare in advance: Obtain root access to the host Prepare the required Yum repositories Install CA certificates","title":"Before Starting"},{"location":"data/xrootd/install-cms-xcache/#installing-the-cache","text":"The CMS XCache software consists of an XRootD server with special configuration and supporting services. To simplify installation, OSG provides convenience RPMs that install all required packages with a single command: root@host # yum install cms-xcache","title":"Installing the Cache"},{"location":"data/xrootd/install-cms-xcache/#configuring-the-cache","text":"First, you must create a \"cache directory\", which will be used to store downloaded files. By default this is /mnt/stash . We recommend using a separate file system for the cache directory, with at least 1 TB of storage available. Note The cache directory must be writable by the xrootd:xrootd user and group. The cms-xcache package provides default configuration files in /etc/xrootd/xrootd-cms-xcache.cfg and /etc/xrootd/config.d/ . Administrators may provide additional configuration by placing files in /etc/xrootd/config.d/1*.cfg (for files that need to be processed BEFORE the OSG configuration) or /etc/xrootd/config.d/9*.cfg (for files that need to be processed AFTER the OSG configuration). You must configure every variable in /etc/xrootd/config.d/10-common-site-local.cfg . The mandatory variables to configure are: set rootdir = /mnt/stash : the mounted filesystem path to export. This document refers to this as /mnt/stash . set resourcename = YOUR_RESOURCE_NAME : the resource name registered with the OSG, for example \"T2_US_UCSD\" Note XRootD can manage a set of independent disks for the cache. You can modify the file 90-cms-xcache-disks.cfg and add the disks there; rootdir then just becomes a place to hold symlinks.","title":"Configuring the Cache"},{"location":"data/xrootd/install-cms-xcache/#ensure-the-xrootd-service-has-a-certificate","text":"The service will need a certificate for reporting and to authenticate to CMS AAA.
The easiest solution for this is to use your host certificate and key as follows: Copy the host certificate to /etc/grid-security/xrd/xrd{cert,key}.pem Set the owner of the directory and contents /etc/grid-security/xrd/ to xrootd:xrootd : root@host # chown -R xrootd:xrootd /etc/grid-security/xrd/ Note You must repeat the above steps whenever you renew your host certificate. If you automate certificate renewal, you should automate copying as well. For example, if you are using Certbot for Let's Encrypt, you should write a \"deploy hook\" as documented here . Note You must also register this certificate with the CMS VOMS (https://voms2.cern.ch:8443/voms/cms/)","title":"Ensure the xrootd service has a certificate"},{"location":"data/xrootd/install-cms-xcache/#configuring-optional-features","text":"","title":"Configuring Optional Features"},{"location":"data/xrootd/install-cms-xcache/#adjust-disk-utilization","text":"To adjust the disk utilization of your cache, create or edit a file named /etc/xrootd/config.d/90-local.cfg and set the values of pfc.diskusage . pfc.diskusage 0.90 0.95 The two values correspond to the low and high usage water marks, respectively. When usage goes above the high water mark, the XRootD service will delete cached files until usage goes below the low water mark.","title":"Adjust disk utilization"},{"location":"data/xrootd/install-cms-xcache/#modify-the-storage-access-settings-at-a-site","text":"In order for CMSSW jobs to use the cache at your site you need to modify the storage.xml and create the following rules # Portions of /store in xcache Note If you are installing a multinode cache then instead of yourlocalcache:1094 url should be changed for yourcacheredirector:2040","title":"Modify the storage access settings at a site"},{"location":"data/xrootd/install-cms-xcache/#enable-remote-debugging","text":"XRootD provides remote debugging via a read-only file system named digFS. This feature is disabled by default, but you may enable it if you need help troubleshooting your server. To enable remote debugging, edit /etc/xrootd/digauth.cfg and specify the authorizations for reading digFS. An example of authorizations: all allow gsi g=/glow h=*.cs.wisc.edu This gives access to the config file, log files, core files, and process information to anyone from *.cs.wisc.edu in the /glow VOMS group. See the XRootD manual for the full syntax. Remote debugging should only be enabled for as long as you need assistance. As soon as your issue has been resolved, revert any changes you have made to /etc/xrootd/digauth.cfg .","title":"Enable remote debugging"},{"location":"data/xrootd/install-cms-xcache/#installing-a-multinode-cache-optional","text":"Some sites would like to have a single logical cache composed of several nodes as shown below: This can be achieved by following the next steps","title":"Installing a Multinode Cache (optional)"},{"location":"data/xrootd/install-cms-xcache/#install-an-xcache-redirector","text":"This can be a simple lightweight virtual machine and will be the single point of contact from jobs to the caches. Install the redirector package root@host # yum install xcache-redirector Create file named /etc/xrootd/config.d/04-local-redir.cfg with contents: all.manager yourlocalredir:2041 You must configure every variable in /etc/xrootd/config.d/10-common-site-local.cfg . The mandatory variables to configure are: set rootdir = /mnt/stash : the mounted filesystem path to export. This document refers to this as /mnt/stash . 
set resourcename = YOUR_RESOURCE_NAME : the resource name registered with the OSG, for example \"T2_US_UCSD\" Start and enable the cmsd and xrootd processes: Software Service name Notes XRootD cmsd@xcache-redir.service The cmsd daemon that interacts with the different xrootd servers XRootD xrootd@xcache-redir.service The xrootd daemon which performs authenticated data transfers","title":"Install an XCache redirector"},{"location":"data/xrootd/install-cms-xcache/#configuring-each-of-your-cache-nodes","text":"Create a config file named /etc/xrootd/config.d/94-xrootd-manager.cfg on each of the nodes where you installed your caches, with the following contents: all.manager yourlocalredir:2041 Start and enable the cmsd service: Software Service name Notes XRootD cmsd@cms-xcache.service The cmsd daemon that interacts with the different xrootd servers","title":"Configuring each of your cache nodes"},{"location":"data/xrootd/install-cms-xcache/#managing-cms-xcache-and-associated-services","text":"These services must be managed by systemctl and may start additional services as dependencies. As a reminder, here are common service commands (all run as root ) for EL7: To... On EL7, run the command... Start a service systemctl start Stop a service systemctl stop Enable a service to start on boot systemctl enable Disable a service from starting on boot systemctl disable ","title":"Managing CMS XCache and associated services"},{"location":"data/xrootd/install-cms-xcache/#cms-xcache-services","text":"Software Service name Notes XRootD xrootd@cms-xcache.service The XRootD daemon, which performs the data transfers XRootD (Optional) cmsd@cms-xcache.service The cmsd daemon that interacts with the different xrootd servers Fetch CRL EL8: fetch-crl.timer EL7: fetch-crl-boot and fetch-crl-cron Required to authenticate monitoring services.
See CA documentation for more info xrootd-renew-proxy.service Renew a proxy for downloads to the cache xrootd-renew-proxy.timer Trigger daily proxy renewal","title":"CMS XCache services"},{"location":"data/xrootd/install-cms-xcache/#xcache-redirector-services-optional","text":"In the node where the cache redirector is installed these are the list of services: Software Service name Notes XRootD (Optional) xrootd@xcache-redir.service The xrootd daemon which performs authenticated data transfers XRootD (Optional) cmsd@xcache-redir.service The xrootd daemon which performs authenticated data transfers","title":"XCache redirector services (Optional)"},{"location":"data/xrootd/install-cms-xcache/#validating-the-cache","text":"The cache server functions as a normal CMS XRootD server so first verify it with a personal CMS X.509 proxy: === VO cms extension information === VO : cms subject : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=efajardo/CN=722781/CN=Edgar Fajardo Hernandez issuer : /DC=ch/DC=cern/OU=computers/CN=lcg-voms2.cern.ch attribute : /cms/Role=NULL/Capability=NULL attribute : /cms/uscms/Role=NULL/Capability=NULL timeleft : 71:59:46 uri : lcg-voms2.cern.ch:15002 Then test using xrdcp directly in your cache: user@host $ xrdcp -vf -d 1 root://cache_host:1094//store/data/Run2017B/SingleElectron/MINIAOD/31Mar2018-v1/60000/9E0F8458-EA37-E811-93F1-008CFAC919F0.root /dev/null","title":"Validating the Cache"},{"location":"data/xrootd/install-cms-xcache/#getting-help","text":"To get assistance, please use the this page .","title":"Getting Help"},{"location":"data/xrootd/install-shoveler/","text":"Installing the XRootD Monitoring Shoveler \u00b6 The XRootD Monitoring Shoveler is designed to accept the XRootD monitoring packets and \"shovel\" them to the OSG message bus. Shoveling is the act of moving messages from one medium to another. In this case, the shoveler is moving messages from a UDP stream to a message bus. graph LR subgraph Site subgraph Node 1 node1[XRootD] -- UDP --> shoveler1{Shoveler}; end subgraph Node 2 node2[XRootD] -- UDP --> shoveler1{Shoveler}; end end; subgraph OSG Operations shoveler1 -- TCP/TLS --> C[Message Bus]; C -- Raw --> D[XRootD Collector]; D -- Summary --> C; C -- Summary --> E[(Storage)]; style shoveler1 font-weight:bolder,stroke-width:4px,stroke:#E74C3C,font-size:4em,color:#E74C3C end; Installing the Shoveler \u00b6 The shoveler can be installed via RPM, container, or staticly compiled binary. Requirements for running the Shoveler \u00b6 An open port (configurable) that can receive UDP packets from the XRootD servers on the shoveler server. It does not need to be an open port to the internet, only open to the XRootD servers. Outgoing TCP connectivity on the shoveler host. A directory on the shoveler host to store the on-disk queue. Resource Requirements \u00b6 RAM : Production shovelers use less than 50MB of memory. Disk : If the shoveler is disconnected from the message bus, it will store the messages on disk until reconnected. Through testing, a disconnected shoveler with 12 busy XRootD servers will generate <30 MB of data a day on disk. CPU : A production shoveler will use 1-2% of a CPU, depending on how many XRootD servers are reporting to the shoveler. A shoveler with 12 busy XRootD servers reporting to it uses 1-2% of a CPU. Network : A production shoveler will receive UDP messages from XRootD servers and send them to a message bus. The incoming and outgoing network utilization will be the same. 
In testing, a shoveler will use <30MB of data a day on the network. Configuring the Shoveler \u00b6 Configuration can be specified with environment variables or a configuration file. The configuration file is in yaml . An example configuration file is distributed with the shoveler. In the RPM, the configuration file is located in /etc/xrootd-monitoring-shoveler/config.yaml . Below, we will break the configuration file into fragments but together they make a whole configuration file. Environment variables can be derived from the yaml. Every environment variable starts with SHOVELER_ , then continues with the structure of the configuration file. For example, the amqp url can be configured with the environment variable SHOVELER_AMQP_URL . The verify option can be configured with SHOVELER_VERIFY . Configuration Fragments \u00b6 AMQP Configuration \u00b6 AMQP configuration. For the OSG, the url should be amqps://clever-turkey.rmq.cloudamqp.com/xrd-mon . The exchange should is correct for the OSG. token_location is the path to the authentication token. # AMQP configuration amqp : url : amqps://username:password@example.com/vhost exchange : shoveled-xrd topic : token_location : /etc/xrootd-monitoring-shoveler/token Listening to UDP messages \u00b6 Where to listen for UDP messages from XRootD servers. listen : port : 9993 ip : 0.0.0.0 Verify packet header \u00b6 Whether to verify the header of the packet matches XRootD's monitoring packet format. verify : true Prometheus monitoring data \u00b6 Listening location of Prometheus metrics to view the performance and status of the shoveler in Prometheus format. # Export prometheus metrics metrics : enable : true port : 8000 Queue Configuration \u00b6 Directory to store overflow of queue onto disk. The queue keeps 100 messages in memory. If the shoveler is disconnected from the message bus, it will store messages over the 100 in memory onto disk into this directory. Once the connection has been re-established the queue will be emptied. The queue on disk is persistent between restarts. queue_directory : /tmp/shoveler-queue IP Mapping Configuration \u00b6 Mapping configuration (optional). If map.all is set, all messages will be mapped to the configured IP address. For example, with the above configuration, if a packet comes in with the private IP address of 192.168.0.4, the packet origin will be changed to 172.0.0.4. The port is always preserved. # map: # all: 172.0.0.4 If you want multiple mappings, you can specify multiple map entries. # map: # 192.168.0.5: 172.0.0.5 # 192.168.0.6: 129.93.10.7 Configuring Security \u00b6 A token is used to authenticate and authorize the shoveler with the message bus. The token is generated by the shoveler's lightweight issuer. Sequence of getting a token for the shoveler is shown below. sequenceDiagram User->>oidc-agent: Authenticate oidc-agent->>Issuer: Register agent Issuer->>oidc-agent: User Code oidc-agent->>User: User Code and URL User->>Issuer: Authenticate at URL oidc-agent->>Issuer: Get Token Get your unique CILogon User Identifier from CILogon . It is under User Attributes, and follows the pattern http://cilogon.org/serverA/users/12345. Open a ticket at help@osg-htc.org with your CILogon User Identifier to authorize your login with the renewer. Mention in the ticket that you want authorization for setting up a Shoveler. Install the OSG Token Renewal Service When installing, the issuer is https://lw-issuer.osgdev.chtc.io/scitokens-server/ When asked about scopes, accept the default. 
Follow through authentication the flow. In the configuration for the issuer, /etc/osg/token-renewer/config.ini , the token location must match the location of the token in the Shoveler configuration.","title":"Install XRootD Shoveler"},{"location":"data/xrootd/install-shoveler/#installing-the-xrootd-monitoring-shoveler","text":"The XRootD Monitoring Shoveler is designed to accept the XRootD monitoring packets and \"shovel\" them to the OSG message bus. Shoveling is the act of moving messages from one medium to another. In this case, the shoveler is moving messages from a UDP stream to a message bus. graph LR subgraph Site subgraph Node 1 node1[XRootD] -- UDP --> shoveler1{Shoveler}; end subgraph Node 2 node2[XRootD] -- UDP --> shoveler1{Shoveler}; end end; subgraph OSG Operations shoveler1 -- TCP/TLS --> C[Message Bus]; C -- Raw --> D[XRootD Collector]; D -- Summary --> C; C -- Summary --> E[(Storage)]; style shoveler1 font-weight:bolder,stroke-width:4px,stroke:#E74C3C,font-size:4em,color:#E74C3C end;","title":"Installing the XRootD Monitoring Shoveler"},{"location":"data/xrootd/install-shoveler/#installing-the-shoveler","text":"The shoveler can be installed via RPM, container, or staticly compiled binary.","title":"Installing the Shoveler"},{"location":"data/xrootd/install-shoveler/#requirements-for-running-the-shoveler","text":"An open port (configurable) that can receive UDP packets from the XRootD servers on the shoveler server. It does not need to be an open port to the internet, only open to the XRootD servers. Outgoing TCP connectivity on the shoveler host. A directory on the shoveler host to store the on-disk queue.","title":"Requirements for running the Shoveler"},{"location":"data/xrootd/install-shoveler/#resource-requirements","text":"RAM : Production shovelers use less than 50MB of memory. Disk : If the shoveler is disconnected from the message bus, it will store the messages on disk until reconnected. Through testing, a disconnected shoveler with 12 busy XRootD servers will generate <30 MB of data a day on disk. CPU : A production shoveler will use 1-2% of a CPU, depending on how many XRootD servers are reporting to the shoveler. A shoveler with 12 busy XRootD servers reporting to it uses 1-2% of a CPU. Network : A production shoveler will receive UDP messages from XRootD servers and send them to a message bus. The incoming and outgoing network utilization will be the same. In testing, a shoveler will use <30MB of data a day on the network.","title":"Resource Requirements"},{"location":"data/xrootd/install-shoveler/#configuring-the-shoveler","text":"Configuration can be specified with environment variables or a configuration file. The configuration file is in yaml . An example configuration file is distributed with the shoveler. In the RPM, the configuration file is located in /etc/xrootd-monitoring-shoveler/config.yaml . Below, we will break the configuration file into fragments but together they make a whole configuration file. Environment variables can be derived from the yaml. Every environment variable starts with SHOVELER_ , then continues with the structure of the configuration file. For example, the amqp url can be configured with the environment variable SHOVELER_AMQP_URL . The verify option can be configured with SHOVELER_VERIFY .","title":"Configuring the Shoveler"},{"location":"data/xrootd/install-shoveler/#configuration-fragments","text":"","title":"Configuration Fragments"},{"location":"data/xrootd/install-shoveler/#amqp-configuration","text":"AMQP configuration. 
For the OSG, the url should be amqps://clever-turkey.rmq.cloudamqp.com/xrd-mon . The exchange shown is correct for the OSG. token_location is the path to the authentication token. # AMQP configuration amqp : url : amqps://username:password@example.com/vhost exchange : shoveled-xrd topic : token_location : /etc/xrootd-monitoring-shoveler/token","title":"AMQP Configuration"},{"location":"data/xrootd/install-shoveler/#listening-to-udp-messages","text":"Where to listen for UDP messages from XRootD servers. listen : port : 9993 ip : 0.0.0.0","title":"Listening to UDP messages"},{"location":"data/xrootd/install-shoveler/#verify-packet-header","text":"Whether to verify that the packet header matches XRootD's monitoring packet format. verify : true","title":"Verify packet header"},{"location":"data/xrootd/install-shoveler/#prometheus-monitoring-data","text":"The listening location for Prometheus metrics, which expose the performance and status of the shoveler in Prometheus format. # Export prometheus metrics metrics : enable : true port : 8000","title":"Prometheus monitoring data"},{"location":"data/xrootd/install-shoveler/#queue-configuration","text":"Directory to store queue overflow on disk. The queue keeps 100 messages in memory. If the shoveler is disconnected from the message bus, messages beyond the 100 kept in memory are stored on disk in this directory. Once the connection has been re-established, the queue will be emptied. The queue on disk is persistent between restarts. queue_directory : /tmp/shoveler-queue","title":"Queue Configuration"},{"location":"data/xrootd/install-shoveler/#ip-mapping-configuration","text":"Mapping configuration (optional). If map.all is set, all messages will be mapped to the configured IP address. For example, if all is mapped to 172.0.0.4 and a packet comes in from the private IP address 192.168.0.4, the packet origin will be changed to 172.0.0.4. The port is always preserved. # map: # all: 172.0.0.4 If you want multiple mappings, you can specify multiple map entries. # map: # 192.168.0.5: 172.0.0.5 # 192.168.0.6: 129.93.10.7","title":"IP Mapping Configuration"},{"location":"data/xrootd/install-shoveler/#configuring-security","text":"A token is used to authenticate and authorize the shoveler with the message bus. The token is generated by the shoveler's lightweight issuer. The sequence for getting a token for the shoveler is shown below. sequenceDiagram User->>oidc-agent: Authenticate oidc-agent->>Issuer: Register agent Issuer->>oidc-agent: User Code oidc-agent->>User: User Code and URL User->>Issuer: Authenticate at URL oidc-agent->>Issuer: Get Token Get your unique CILogon User Identifier from CILogon . It is under User Attributes, and follows the pattern http://cilogon.org/serverA/users/12345. Open a ticket at help@osg-htc.org with your CILogon User Identifier to authorize your login with the renewer. Mention in the ticket that you want authorization for setting up a Shoveler. Install the OSG Token Renewal Service When installing, the issuer is https://lw-issuer.osgdev.chtc.io/scitokens-server/ When asked about scopes, accept the default. Follow through the authentication flow.
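Once the renewer has completed the flow, it writes the token to disk; a quick sanity check on the shoveler host is to confirm that the file exists at the token_location used in the AMQP fragment above and is readable by the shoveler:
root@host # ls -l /etc/xrootd-monitoring-shoveler/token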
In the configuration for the issuer, /etc/osg/token-renewer/config.ini , the token location must match the location of the token in the Shoveler configuration.","title":"Configuring Security"},{"location":"data/xrootd/install-standalone/","text":"Install XRootD Standalone \u00b6 XRootD is a hierarchical storage system that can be used in many ways to access data, typically distributed among actual storage resources. In its standalone configuration, XRootD acts as a simple layer exporting data from a storage system to the outside world. This document focuses on installing a default configuration of XRootD standalone that provides the following features: Supports any POSIX-based storage system Macaroons, X.509 proxy, and VOMS proxy authentication Third-Party Copy over HTTP (HTTP-TPC) Before Starting \u00b6 Before starting the installation process, consider the following points: User IDs: If it does not exist already, the installation will create the Linux user ID xrootd Service certificate: The XRootD service uses a host certificate and key pair at /etc/grid-security/xrd/xrdcert.pem and /etc/grid-security/xrd/xrdkey.pem that must be owned by the xrootd user Networking: The XRootD service uses port 1094 by default As with all OSG software installations, there are some one-time (per host) steps to prepare in advance: Ensure the host has a supported operating system Obtain root access to the host Prepare the required Yum repositories Install CA certificates Installing XRootD \u00b6 To install an XRootD Standalone server, run the following command: root@xrootd-standalone # yum install osg-xrootd-standalone Configuring XRootD \u00b6 To configure XRootD as a standalone server, you will modify /etc/xrootd/xrootd-standalone.cfg and the config files under /etc/xrootd/config.d/ as follows: Configure a rootdir in /etc/xrootd/config.d/10-common-site-local.cfg , to point to the top of the directory hierarchy which you wish to serve via XRootD. set rootdir = Carefully consider your rootdir Do not set rootdir to / . This might result in serving private information. If you want to limit the sub-directories to serve under your configured rootdir , comment out the all.export / directive in /etc/xrootd/config.d/90-osg-standalone-paths.cfg , and add an all.export directive for each directory under rootdir that you wish to serve via XRootD. This is useful if you have a mixture of files under your rootdir , for example from multiple users, but only want to expose a subset of them to the world. For example, to serve the contents of /data/store and /data/public (with rootdir configured to /data ): all.export /store/ all.export /public/ If you want to serve everything under your configured rootdir , you don't have to change anything. Danger The directories specified this way are writable by default. Access controls should be managed via authorization configuration . In /etc/xrootd/config.d/10-common-site-local.cfg , add a line to set the resourcename variable. Unless your supported VOs' policies state otherwise, this should match the resource name of your XRootD service. For example, the XRootD service registered at the University of Florida site should set the following configuration: set resourcename = UFlorida-XRD Configuring authentication and authorization \u00b6 XRootD offers several authentication options using security plugins to validate incoming credentials, such as bearer tokens, X.509 proxies, and VOMS proxies. 
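Before moving on to authentication, it can help to see the site-local settings above combined. The following is a hypothetical /etc/xrootd/config.d/10-common-site-local.cfg for a site that serves /data under the resource name MYSITE-XRD; both values are illustrative and must be replaced with your own rootdir and registered resource name.
set rootdir = /data
set resourcename = MYSITE-XRD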
Please follow the XRootD authorization documentation for instructions on how to configure authentication and authorization, including validating credentials and mapping them to users if desired. Optional configuration \u00b6 The following configuration steps are optional and will likely not be required for setting up a small site. If you do not need any of the following special configurations, skip to the section on using XRootD . Enabling multi-user support \u00b6 The xrootd-multiuser plugin allows XRootD to write files on the storage system as the authenticated user instead of the xrootd user. If your XRootD service only allows read-only access, you should skip installation of this plugin. To set up XRootD in multi-user mode, install the xrootd-multiuser package: root@xrootd-standalone # yum install xrootd-multiuser Note If you are using XRootD-Multiuser with a VOMS FQAN, you need XRootD 5.5.0 or greater. Throttling IO requests \u00b6 XRootD allows throttling of requests to the underlying filesystem. To enable this, In an /etc/xrootd/config.d/*.cfg file, e.g. /etc/xrootd/config.d/99-local.cfg , set the following configuration: xrootd.fslib throttle default throttle.throttle concurrency data Replacing with the IO concurrency limit, measured in seconds (e.g., 100 connections taking 1ms each, would be 0.1), and with the data rate limit in bytes per second. Note that you may also just specify either the concurrency limit: xrootd.fslib throttle default throttle.throttle concurrency Or the data rate limit: xrootd.fslib throttle default throttle.throttle data If XRootD is already running, restart the relevant XRootD service for your configuration to take effect. For more details of the throttling implementation, see the upstream documentation . Enabling CMS TFC support (CMS sites only) \u00b6 For CMS sites, there is a package available to integrate rule-based name lookup using a storage.xml file. If you are not setting up a service for CMS, skip this section. To install an xrootd-cmstfc , run the following command: root@xrootd-standalone # yum install --enablerepo = osg-contrib xrootd-cmstfc You will need to add your storage.xml to /etc/xrootd/storage.xml and then add the following line to your XRootD configuration: # Integrate with CMS TFC, placed in /etc/xrootd/storage.xml oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=hadoop Add the orange text only if you are running hadoop (see below). See the CMS TWiki for more information: https://twiki.cern.ch/twiki/bin/view/Main/XrootdTfcChanges https://twiki.cern.ch/twiki/bin/view/Main/HdfsXrootdInstall Using XRootD \u00b6 In addition to the XRootD service itself, there are a number of supporting services in your installation. The specific services are: Software Service Name Notes Fetch CRL EL8,EL9: fetch-crl.timer EL7: fetch-crl-boot and fetch-crl-cron See CA documentation for more info XRootD xrootd@standalone Primary xrootd service if not running in multi-user mode XRootD Multi-user xrootd-privileged@standalone Primary xrootd service to start instead of xrootd@standalone if running in multi-user mode Start the services in the order listed and stop them in reverse order. 
As a reminder, here are common service commands (all run as root ): To \u2026 Run the command\u2026 Start a service systemctl start SERVICE-NAME Stop a service systemctl stop SERVICE-NAME Enable a service to start during boot systemctl enable SERVICE-NAME Disable a service from starting during boot systemctl disable SERVICE-NAME Validating XRootD \u00b6 To validate an XRootD installation, perform the following verification steps: Note If you have configured authentication/authorization for XRootD, be sure you have given yourself the necessary permissions to run these tests. For example, if you are using an X.509 proxy, make sure your DN is mapped to a user in /etc/grid-security/grid-mapfile , make sure you have a valid proxy on your local machine, and ensure that the Authfile on the XRootD server gives write access to the mapped user from /etc/grid-security/grid-mapfile . Verify authorization of bearer tokens and/or proxies Verify HTTP-TPC using the same GFAL2 client tools: Requires gfal2 >= 2.20.0 gfal2-2.20.0 contains a fix for a bug affecting XRootD HTTP-TPC support. Copy a file from your XRootD standalone host to another host and path where you have write access: root@xrootd-standalone # gfal-copy davs://localhost:1094/ \\ / Replacing with the path to a file that you can read on your host relative to rootdir ; with the protocol, FQDN, and port of the remote storage host; and to a location on the remote storage host where you have write access. Copy a file from a remote host where you have read access to your XRootD standalone installation: root@xrootd-standalone # gfal-copy / \\ davs://localhost:1094/ Replacing with the protocol, FQDN, and port of the remote storage host; with the path to a file that you can read on the remote storage host; and to a location on the XRootD standalone host relative to rootdir where you have write access. Registering an XRootD Standalone Server \u00b6 To register your XRootD server, follow the general registration instructions here with the following XRootD-specific details: Add an XRootD component: section to the Services: list, with any relevant fields for that service. This is a partial example: ... FQDN: Services: XRootD component: Description: Standalone XRootD server ... Replacing with your XRootD server's DNS entry. If you are setting up a new resource, set Active: false . Only set Active: true for a resource when it is accepting requests and ready for production. Getting Help \u00b6 To get assistance. please use the Help Procedure page. Reference \u00b6 XRootD documentation Export directive in the XRootD configuration and relevant options Service Configuration \u00b6 The configuration that your XRootD service uses is determined by the service name given to systemctl . To use the standalone config, you would start XRootD with the following command: root@host # systemctl start xrootd@standalone File locations \u00b6 Service/Process Configuration File Description xrootd /etc/xrootd/xrootd-standalone.cfg Main XRootD configuration /etc/xrootd/config.d/ Drop-in configuration dir /etc/xrootd/auth_file Authorized users file Service/Process Log File Description xrootd /var/log/xrootd/standalone/xrootd.log XRootD server daemon log","title":"Install XRootD Standalone"},{"location":"data/xrootd/install-standalone/#install-xrootd-standalone","text":"XRootD is a hierarchical storage system that can be used in many ways to access data, typically distributed among actual storage resources. 
In its standalone configuration, XRootD acts as a simple layer exporting data from a storage system to the outside world. This document focuses on installing a default configuration of XRootD standalone that provides the following features: Supports any POSIX-based storage system Macaroons, X.509 proxy, and VOMS proxy authentication Third-Party Copy over HTTP (HTTP-TPC)","title":"Install XRootD Standalone"},{"location":"data/xrootd/install-standalone/#before-starting","text":"Before starting the installation process, consider the following points: User IDs: If it does not exist already, the installation will create the Linux user ID xrootd Service certificate: The XRootD service uses a host certificate and key pair at /etc/grid-security/xrd/xrdcert.pem and /etc/grid-security/xrd/xrdkey.pem that must be owned by the xrootd user Networking: The XRootD service uses port 1094 by default As with all OSG software installations, there are some one-time (per host) steps to prepare in advance: Ensure the host has a supported operating system Obtain root access to the host Prepare the required Yum repositories Install CA certificates","title":"Before Starting"},{"location":"data/xrootd/install-standalone/#installing-xrootd","text":"To install an XRootD Standalone server, run the following command: root@xrootd-standalone # yum install osg-xrootd-standalone","title":"Installing XRootD"},{"location":"data/xrootd/install-standalone/#configuring-xrootd","text":"To configure XRootD as a standalone server, you will modify /etc/xrootd/xrootd-standalone.cfg and the config files under /etc/xrootd/config.d/ as follows: Configure a rootdir in /etc/xrootd/config.d/10-common-site-local.cfg , to point to the top of the directory hierarchy which you wish to serve via XRootD. set rootdir = Carefully consider your rootdir Do not set rootdir to / . This might result in serving private information. If you want to limit the sub-directories to serve under your configured rootdir , comment out the all.export / directive in /etc/xrootd/config.d/90-osg-standalone-paths.cfg , and add an all.export directive for each directory under rootdir that you wish to serve via XRootD. This is useful if you have a mixture of files under your rootdir , for example from multiple users, but only want to expose a subset of them to the world. For example, to serve the contents of /data/store and /data/public (with rootdir configured to /data ): all.export /store/ all.export /public/ If you want to serve everything under your configured rootdir , you don't have to change anything. Danger The directories specified this way are writable by default. Access controls should be managed via authorization configuration . In /etc/xrootd/config.d/10-common-site-local.cfg , add a line to set the resourcename variable. Unless your supported VOs' policies state otherwise, this should match the resource name of your XRootD service. For example, the XRootD service registered at the University of Florida site should set the following configuration: set resourcename = UFlorida-XRD","title":"Configuring XRootD"},{"location":"data/xrootd/install-standalone/#configuring-authentication-and-authorization","text":"XRootD offers several authentication options using security plugins to validate incoming credentials, such as bearer tokens, X.509 proxies, and VOMS proxies. 
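Because the X.509 and VOMS options rely on the host certificate and key listed in the Before Starting section, it is worth confirming their ownership before enabling authentication. A sketch, assuming the default paths from this page; the exact permission bits shown are a local policy choice:
root@xrootd-standalone # chown xrootd:xrootd /etc/grid-security/xrd/xrdcert.pem /etc/grid-security/xrd/xrdkey.pem
root@xrootd-standalone # chmod 0644 /etc/grid-security/xrd/xrdcert.pem
root@xrootd-standalone # chmod 0400 /etc/grid-security/xrd/xrdkey.pem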
Please follow the XRootD authorization documentation for instructions on how to configure authentication and authorization, including validating credentials and mapping them to users if desired.","title":"Configuring authentication and authorization"},{"location":"data/xrootd/install-standalone/#optional-configuration","text":"The following configuration steps are optional and will likely not be required for setting up a small site. If you do not need any of the following special configurations, skip to the section on using XRootD .","title":"Optional configuration"},{"location":"data/xrootd/install-standalone/#enabling-multi-user-support","text":"The xrootd-multiuser plugin allows XRootD to write files on the storage system as the authenticated user instead of the xrootd user. If your XRootD service only allows read-only access, you should skip installation of this plugin. To set up XRootD in multi-user mode, install the xrootd-multiuser package: root@xrootd-standalone # yum install xrootd-multiuser Note If you are using XRootD-Multiuser with a VOMS FQAN, you need XRootD 5.5.0 or greater.","title":"Enabling multi-user support"},{"location":"data/xrootd/install-standalone/#throttling-io-requests","text":"XRootD allows throttling of requests to the underlying filesystem. To enable this, In an /etc/xrootd/config.d/*.cfg file, e.g. /etc/xrootd/config.d/99-local.cfg , set the following configuration: xrootd.fslib throttle default throttle.throttle concurrency data Replacing with the IO concurrency limit, measured in seconds (e.g., 100 connections taking 1ms each, would be 0.1), and with the data rate limit in bytes per second. Note that you may also just specify either the concurrency limit: xrootd.fslib throttle default throttle.throttle concurrency Or the data rate limit: xrootd.fslib throttle default throttle.throttle data If XRootD is already running, restart the relevant XRootD service for your configuration to take effect. For more details of the throttling implementation, see the upstream documentation .","title":"Throttling IO requests"},{"location":"data/xrootd/install-standalone/#enabling-cms-tfc-support-cms-sites-only","text":"For CMS sites, there is a package available to integrate rule-based name lookup using a storage.xml file. If you are not setting up a service for CMS, skip this section. To install an xrootd-cmstfc , run the following command: root@xrootd-standalone # yum install --enablerepo = osg-contrib xrootd-cmstfc You will need to add your storage.xml to /etc/xrootd/storage.xml and then add the following line to your XRootD configuration: # Integrate with CMS TFC, placed in /etc/xrootd/storage.xml oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=hadoop Add the orange text only if you are running hadoop (see below). See the CMS TWiki for more information: https://twiki.cern.ch/twiki/bin/view/Main/XrootdTfcChanges https://twiki.cern.ch/twiki/bin/view/Main/HdfsXrootdInstall","title":"Enabling CMS TFC support (CMS sites only)"},{"location":"data/xrootd/install-standalone/#using-xrootd","text":"In addition to the XRootD service itself, there are a number of supporting services in your installation. 
The specific services are: Software Service Name Notes Fetch CRL EL8,EL9: fetch-crl.timer EL7: fetch-crl-boot and fetch-crl-cron See CA documentation for more info XRootD xrootd@standalone Primary xrootd service if not running in multi-user mode XRootD Multi-user xrootd-privileged@standalone Primary xrootd service to start instead of xrootd@standalone if running in multi-user mode Start the services in the order listed and stop them in reverse order. As a reminder, here are common service commands (all run as root ): To \u2026 Run the command\u2026 Start a service systemctl start SERVICE-NAME Stop a service systemctl stop SERVICE-NAME Enable a service to start during boot systemctl enable SERVICE-NAME Disable a service from starting during boot systemctl disable SERVICE-NAME","title":"Using XRootD"},{"location":"data/xrootd/install-standalone/#validating-xrootd","text":"To validate an XRootD installation, perform the following verification steps: Note If you have configured authentication/authorization for XRootD, be sure you have given yourself the necessary permissions to run these tests. For example, if you are using an X.509 proxy, make sure your DN is mapped to a user in /etc/grid-security/grid-mapfile , make sure you have a valid proxy on your local machine, and ensure that the Authfile on the XRootD server gives write access to the mapped user from /etc/grid-security/grid-mapfile . Verify authorization of bearer tokens and/or proxies Verify HTTP-TPC using the same GFAL2 client tools: Requires gfal2 >= 2.20.0 gfal2-2.20.0 contains a fix for a bug affecting XRootD HTTP-TPC support. Copy a file from your XRootD standalone host to another host and path where you have write access: root@xrootd-standalone # gfal-copy davs://localhost:1094/ \\ / Replacing with the path to a file that you can read on your host relative to rootdir ; with the protocol, FQDN, and port of the remote storage host; and to a location on the remote storage host where you have write access. Copy a file from a remote host where you have read access to your XRootD standalone installation: root@xrootd-standalone # gfal-copy / \\ davs://localhost:1094/ Replacing with the protocol, FQDN, and port of the remote storage host; with the path to a file that you can read on the remote storage host; and to a location on the XRootD standalone host relative to rootdir where you have write access.","title":"Validating XRootD"},{"location":"data/xrootd/install-standalone/#registering-an-xrootd-standalone-server","text":"To register your XRootD server, follow the general registration instructions here with the following XRootD-specific details: Add an XRootD component: section to the Services: list, with any relevant fields for that service. This is a partial example: ... FQDN: Services: XRootD component: Description: Standalone XRootD server ... Replacing with your XRootD server's DNS entry. If you are setting up a new resource, set Active: false . Only set Active: true for a resource when it is accepting requests and ready for production.","title":"Registering an XRootD Standalone Server"},{"location":"data/xrootd/install-standalone/#getting-help","text":"To get assistance. 
please use the Help Procedure page.","title":"Getting Help"},{"location":"data/xrootd/install-standalone/#reference","text":"XRootD documentation Export directive in the XRootD configuration and relevant options","title":"Reference"},{"location":"data/xrootd/install-standalone/#service-configuration","text":"The configuration that your XRootD service uses is determined by the service name given to systemctl . To use the standalone config, you would start XRootD with the following command: root@host # systemctl start xrootd@standalone","title":"Service Configuration"},{"location":"data/xrootd/install-standalone/#file-locations","text":"Service/Process Configuration File Description xrootd /etc/xrootd/xrootd-standalone.cfg Main XRootD configuration /etc/xrootd/config.d/ Drop-in configuration dir /etc/xrootd/auth_file Authorized users file Service/Process Log File Description xrootd /var/log/xrootd/standalone/xrootd.log XRootD server daemon log","title":"File locations"},{"location":"data/xrootd/install-storage-element/","text":"Installing an XRootD Storage Element \u00b6 Warning This page is out of date and is not known to work with XRootD 5; parts of it do not work with EL 7+. XRootD is a hierarchical storage system that can be used in a variety of ways to access data, typically distributed among actual storage resources. One way to use XRootD is to have it refer to many data resources at a single site, and another way to use it is to refer to many storage systems, most likely distributed among sites. An XRootD system includes a redirector , which accepts requests for data and finds a storage repository \u2014 locally or otherwise \u2014 that can provide the data to the requestor. Use this page to learn how to install, configure, and use an XRootD redirector as part of a Storage Element (SE) or as part of a global namespace. Before Starting \u00b6 Before starting the installation process, consider the following points: User IDs: If it does not exist already, the installation will create the Linux user ID xrootd Service certificate: The XRootD service uses a host certificate at /etc/grid-security/host*.pem Networking: The XRootD service uses port 1094 by default As with all OSG software installations, there are some one-time (per host) steps to prepare in advance: Ensure the host has a supported operating system Obtain root access to the host Prepare the required Yum repositories Install CA certificates Installing an XRootD Server \u00b6 An installation of the XRootD server consists of the server itself and its dependencies. Install these with Yum: root@host # yum install osg-xrootd Configuring an XRootD Server \u00b6 An advanced XRootD setup has multiple components; it is important to validate that each additional component that you set up is working before moving on to the next component. We have included validation instructions after each component below. Creating an XRootD cluster \u00b6 If your storage is spread out over multiple hosts, you will need to set up an XRootD cluster . The cluster uses one \"redirector\" node as a frontend for user accesses, and multiple data nodes that have the data that users request. Two daemons will run on each node: xrootd The eXtended Root Daemon controls file access and storage. cmsd The Cluster Management Services Daemon controls communication between nodes. Note that for large virtual organizations, a site-level redirector may actually also communicate upwards to a regional or global redirector that handles access to a multi-level hierarchy. 
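Whichever hierarchy applies, every node in the cluster runs both daemons side by side. As a sketch, using the systemd unit names listed in the Managing XRootD services section of this page, and starting the redirector before the data nodes (the prompts indicate which host each command runs on):
root@redirector # systemctl start xrootd@clustered cmsd@clustered
root@data # systemctl start xrootd@clustered cmsd@clustered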
This section will only cover handling one level of XRootD hierarchy. In the instructions below, will refer to the redirector host and will refer to the data node host. These should be replaced with the fully-qualified domain name of the host in question. Modify /etc/xrootd/xrootd-clustered.cfg \u00b6 You will need to modify the xrootd-clustered.cfg on the redirector node and each data node. The following example should serve as a base configuration for clustering. Further customizations are detailed below. all.export /mnt/xrootd stage set xrdr = all.manager $(xrdr):3121 if $(xrdr) # Lines in this block are only executed on the redirector node all.role manager else # Lines in this block are executed on all nodes but the redirector node all.role server cms.space min 2g 5g fi You will need to customize the following lines: Configuration Line Changes Needed all.export /mnt/xrootd stage Change /mnt/xrootd to the directory to allow XRootD access to set xrdr= Change to the hostname of the redirector cms.space min 2g 5g Reserve this amount of free space on the node. For this example, if space falls below 2GB, xrootd will not store further files on this node until space climbs above 5GB. You can use k , m , g , or t to indicate kilobytes, megabytes, gigabytes, or terabytes, respectively. Further information can be found at https://xrootd.slac.stanford.edu/docs.html Verifying the clustered config \u00b6 Start both xrootd and cmsd on all nodes according to the instructions in the Using XRootD section . Verify that you can copy a file such as /bin/sh to /mnt/xrootd on the server data via the redirector: root@host # xrdcp /bin/sh root://:1094///mnt/xrootd/second_test [xrootd] Total 0.76 MB [====================] 100.00 % [inf MB/s] Check that the /mnt/xrootd/second_test is located on data server . (Optional) Adding High Availability (HA) redirectors \u00b6 It is possible to have an XRootD clustered setup with more than one redirector to ensure a highly available service. To do this: In the /etc/xrootd/xrootd-clustered.cfg on each data node, follow the instructions in this section with: set xrdr1 = set xrdr2 = all.manager $(xrdr1):3121 all.manager $(xrdr2):3121 Create DNS ALIAS records for pointing to and The FQDN advertised to users interacting with the XRootD cluster should be . (Optional) Adding Simple Server Inventory to your cluster \u00b6 The Simple Server Inventory (SSI) provides a means to maintain an inventory for each data server. SSI requires: A second instance of the xrootd daemon on the redirector A \"composite name space daemon\" ( XrdCnsd ) on each data server; this daemon handles the inventory As an example, we will set up a two-node XRootD cluster with SSI. Host A is a redirector node that is running the following daemons: xrootd redirector cmsd xrootd - second instance that is required for SSI Host B is a data server that is running the following daemons: xrootd data server cmsd XrdCnsd - started automatically by xrootd We will need to create a directory on the redirector node for inventory files. root@host # mkdir -p /data/inventory root@host # chown xrootd:xrootd /data/inventory On the data server (host B), let's use a storage cache that will be at a different location from /mnt/xrootd . root@host # mkdir -p /local/xrootd root@host # chown xrootd:xrootd /local/xrootd We will be running two instances of XRootD on .
Modify /etc/xrootd/xrootd-clustered.cfg to give the two instances different behavior, as such: all.export /data/xrootdfs set xrdr= all.manager $(xrdr):3121 if $(xrdr) && named cns all.export /data/inventory xrd.port 1095 else if $(xrdr) all.role manager xrd.port 1094 else all.role server oss.localroot /local/xrootd ofs.notify closew create mkdir mv rm rmdir trunc | /usr/bin/XrdCnsd -d -D 2 -i 90 -b $(xrdr):1095:/data/inventory #add cms.space if you have less the 11GB # cms.space options https://xrootd.slac.stanford.edu/doc/dev410/cms_config.htm cms.space min 2g 5g fi The value of oss.localroot will be prepended to any file access. E.g. accessing root://:1094//data/xrootdfs/test1 will actually go to /local/xrootd/data/xrootdfs/test1 . Starting a second instance of XRootD \u00b6 Create a symlink pointing to /etc/xrootd/xrootd-clustered.cfg at /etc/xrootd/xrootd-cns.cfg : root@host # ln -s /etc/xrootd/xrootd-clustered.cfg /etc/xrootd/xrootd-cns.cfg Start an instance of the xrootd service named cns using the syntax in the managing services section : root@host # systemctl start xrootd@cns Testing an XRootD cluster with SSI \u00b6 Copy file to redirector node specifying storage path (/data/xrootdfs instead of /mnt/xrootd): root@host # xrdcp /bin/sh root://:1094//data/xrootdfs/test1 [xrootd] Total 0.00 MB [================] 100.00 % [inf MB/s] To verify that SSI is working execute cns_ssi command on the redirector node: root@host # cns_ssi list /data/inventory fermicloud054.fnal.gov incomplete inventory as of Mon Apr 11 17:28:11 2011 root@host # cns_ssi updt /data/inventory cns_ssi: fermicloud054.fnal.gov inventory with 1 directory and 1 file updated with 0 errors. root@host # cns_ssi list /data/inventory fermicloud054.fnal.gov complete inventory as of Tue Apr 12 07:38:29 2011 /data/xrootdfs/test1 Note : In this example, fermicloud53.fnal.gov is a redirector node and fermicloud054.fnal.gov is a data node. (Optional) Enabling Xrootd over HTTP \u00b6 XRootD can be accessed using the HTTP protocol. To do that: Add the following line to /etc/xrootd/config.d/10-common-site-local.cfg : set EnableHttp = 1 Testing the configuration From the terminal, generate a proxy and attempt to use davix-get to copy from your XRootD host (the XRootD service needs running; see the services section ). For example, if your server has a file named /store/user/test.root : davix-get https://:1094/store/user/test.root -E /mnt/xrootd/x509up_u`id -u` --capath /etc/grid-security/certificates Note For clients to successfully read from the regional redirector, HTTPS must be enabled for the data servers and the site-level redirector. Warning If you have u * in your Authfile, recall this provides an authorization to ALL users, including unauthenticated. This includes random web spiders! (Optional) Enable HTTP based Writes \u00b6 No changes to the HTTP module is needed to enable HTTP-based writes. The HTTP protocol uses the same authorization setup as the XRootD protocol. For example, you may need to provide a (all) style authorizations to allow users authorization to write. See the Authentication File section for more details. (Optional) Enabling a FUSE mount \u00b6 XRootD storage can be mounted as a standard POSIX filesystem via FUSE, providing users with a more familiar interface.. Modify /etc/fstab by adding the following entries: .... xrootdfs /mnt/xrootd fuse rdr=xroot://:1094/,uid=xrootd 0 0 Replace /mnt/xrootd with the path that you would like to access with. Create /mnt/xrootd directory. 
Make sure the xrootd user exists on the system. Once you are finished, you can mount it: mount /mnt/xrootd You should now be able to run UNIX commands such as ls /mnt/xrootd to see the contents of the XRootD server. (Optional) Authorization \u00b6 For information on how to configure XRootD authorization, please refer to the Configuring XRootD Authorization guide . (Optional) Adding CMS TFC support to XRootD (CMS sites only) \u00b6 For CMS users, there is a package available to integrate rule-based name lookup using a storage.xml file. See this documentation . (Optional) Adding Multi user support for an XRootd server \u00b6 For documentation how to enable multi-user support using XRootD see this documentation . (Optional) Adding File Residency Manager (FRM) to an XRootd cluster \u00b6 If you have a multi-tiered storage system (e.g. some data is stored on SSDs and some on disks or tapes), then install the File Residency Manager (FRM), so you can move data between tiers more easily. If you do not have a multi-tiered storage system, then you do not need FRM and you can skip this section. The FRM deals with two major mechanisms: local disk remote servers The description of fully functional multiple XRootD clusters is beyond the scope of this document. In order to have this fully functional system you will need a global redirector and at least one remote XRootD cluster from where files could be moved to the local cluster. Below are the modifications you should make in order to enable FRM on your local cluster: Make sure that FRM is enabled in /etc/sysconfig/xrootd on your data sever: ROOTD_USER=xrootd XROOTD_GROUP=xrootd XROOTD_DEFAULT_OPTIONS=\"-l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg\" CMSD_DEFAULT_OPTIONS=\"-l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg\" FRMD_DEFAULT_OPTIONS=\"-l /var/log/xrootd/frmd.log -c /etc/xrootd/xrootd-clustered.cfg\" XROOTD_INSTANCES=\"default\" CMSD_INSTANCES=\"default\" FRMD_INSTANCES=\"default\" Modify /etc/xrootd/xrootd-clustered.cfg on both nodes to specify options for frm_xfrd (File Transfer Daemon) and frm_purged (File Purging Daemon). For more information, you can visit the FRM Documentation Start frm daemons on data server: root@host # service frm_xfrd start root@host # service frm_purged start Using XRootD \u00b6 Managing XRootD services \u00b6 Start services on the redirector node before starting any services on the data nodes. If you installed only XRootD itself, you will only need to start the xrootd service. However, if you installed cluster management services, you will need to start cmsd as well. XRootD determines which configuration to use based on the service name specified by systemctl . For example, to have xrootd use the clustered config, you would start up xrootd with this line: root@host # systemctl start xrootd@clustered To use the standalone config instead, you would use: root@host # systemctl start xrootd@standalone The services are: Service EL 7 & 8 service name XRootD (standalone config) xrootd@standalone XRootD (clustered config) xrootd@clustered XRootD (multiuser) xrootd-privileged@clustered CMSD (clustered config) cmsd@clustered As a reminder, here are common service commands (all run as root ): To ... On EL 7 & 8, run the command... 
Start a service systemctl start SERVICE-NAME Stop a service systemctl stop SERVICE-NAME Enable a service to start during boot systemctl enable SERVICE-NAME Disable a service from starting during boot systemctl disable SERVICE-NAME Getting Help \u00b6 To get assistance. please use the Help Procedure page. Reference \u00b6 File locations \u00b6 Service/Process Configuration File Description xrootd /etc/xrootd/xrootd-clustered.cfg Main clustered mode XRootD configuration /etc/xrootd/auth_file Authorized users file Service/Process Log File Description xrootd /var/log/xrootd/xrootd.log XRootD server daemon log cmsd /var/log/xrootd/cmsd.log Cluster management log cns /var/log/xrootd/cns/xrootd.log Server inventory (composite name space) log frm_xfrd , frm_purged /var/log/xrootd/frmd.log File Residency Manager log Links \u00b6 XRootD documentation","title":"Installing an XRootD Storage Element"},{"location":"data/xrootd/install-storage-element/#installing-an-xrootd-storage-element","text":"Warning This page is out of date and is not known to work with XRootD 5; parts of it do not work with EL 7+. XRootD is a hierarchical storage system that can be used in a variety of ways to access data, typically distributed among actual storage resources. One way to use XRootD is to have it refer to many data resources at a single site, and another way to use it is to refer to many storage systems, most likely distributed among sites. An XRootD system includes a redirector , which accepts requests for data and finds a storage repository \u2014 locally or otherwise \u2014 that can provide the data to the requestor. Use this page to learn how to install, configure, and use an XRootD redirector as part of a Storage Element (SE) or as part of a global namespace.","title":"Installing an XRootD Storage Element"},{"location":"data/xrootd/install-storage-element/#before-starting","text":"Before starting the installation process, consider the following points: User IDs: If it does not exist already, the installation will create the Linux user ID xrootd Service certificate: The XRootD service uses a host certificate at /etc/grid-security/host*.pem Networking: The XRootD service uses port 1094 by default As with all OSG software installations, there are some one-time (per host) steps to prepare in advance: Ensure the host has a supported operating system Obtain root access to the host Prepare the required Yum repositories Install CA certificates","title":"Before Starting"},{"location":"data/xrootd/install-storage-element/#installing-an-xrootd-server","text":"An installation of the XRootD server consists of the server itself and its dependencies. Install these with Yum: root@host # yum install osg-xrootd","title":"Installing an XRootD Server"},{"location":"data/xrootd/install-storage-element/#configuring-an-xrootd-server","text":"An advanced XRootD setup has multiple components; it is important to validate that each additional component that you set up is working before moving on to the next component. We have included validation instructions after each component below.","title":"Configuring an XRootD Server"},{"location":"data/xrootd/install-storage-element/#creating-an-xrootd-cluster","text":"If your storage is spread out over multiple hosts, you will need to set up an XRootD cluster . The cluster uses one \"redirector\" node as a frontend for user accesses, and multiple data nodes that have the data that users request. Two daemons will run on each node: xrootd The eXtended Root Daemon controls file access and storage. 
cmsd The Cluster Management Services Daemon controls communication between nodes. Note that for large virtual organizations, a site-level redirector may actually also communicate upwards to a regional or global redirector that handles access to a multi-level hierarchy. This section will only cover handling one level of XRootD hierarchy. In the instructions below, will refer to the redirector host and will refer to the data node host. These should be replaced with the fully-qualified domain name of the host in question.","title":"Creating an XRootD cluster"},{"location":"data/xrootd/install-storage-element/#modify-etcxrootdxrootd-clusteredcfg","text":"You will need to modify the xrootd-clustered.cfg on the redirector node and each data node. The following example should serve as a base configuration for clustering. Further customizations are detailed below. all.export /mnt/xrootd stage set xrdr = all.manager $(xrdr):3121 if $(xrdr) # Lines in this block are only executed on the redirector node all.role manager else # Lines in this block are executed on all nodes but the redirector node all.role server cms.space min 2g 5g fi You will need to customize the following lines: Configuration Line Changes Needed all.export /mnt/xrootd stage Change /mnt/xrootd to the directory to allow XRootD access to set xrdr= Change to the hostname of the redirector cms.space min 2g 5g Reserve this amount of free space on the node. For this example, if space falls below 2GB, xrootd will not store further files on this node until space climbs above 5GB. You can use k , m , g , or t to indicate kilobyte, megabytes, gigabytes, or terabytes, respectively. Further information can be found at https://xrootd.slac.stanford.edu/docs.html","title":"Modify /etc/xrootd/xrootd-clustered.cfg"},{"location":"data/xrootd/install-storage-element/#verifying-the-clustered-config","text":"Start both xrootd and cmsd on all nodes according to the instructions in the Using XRootD section . Verify that you can copy a file such as /bin/sh to /mnt/xrootd on the server data via the redirector: root@host # xrdcp /bin/sh root://:1094///mnt/xrootd/second_test [xrootd] Total 0.76 MB [====================] 100.00 % [inf MB/s] Check that the /mnt/xrootd/second_test is located on data server .","title":"Verifying the clustered config"},{"location":"data/xrootd/install-storage-element/#optional-adding-high-availability-ha-redirectors","text":"It is possible to have an XRootD clustered setup with more than one redirector to ensure high availability service. To do this: In the /etc/xrootd/xrootd-clustered.cfg on each data node follow the instructions in this section with: set xrdr1 = set xrdr2 = all.manager $(xrdr1):3121 all.manager $(xrdr2):3121 Create DNS ALIAS records for pointing to and Advertise the FQDN to users interacting with the XRootD cluster should be .","title":"(Optional) Adding High Availability (HA) redirectors"},{"location":"data/xrootd/install-storage-element/#optional-adding-simple-server-inventory-to-your-cluster","text":"The Simple Server Inventory (SSI) provide means to have an inventory for each data server. SSI requires: A second instance of the xrootd daemon on the redirector A \"composite name space daemon\" ( XrdCnsd ) on each data server; this daemon handles the inventory As an example, we will set up a two-node XRootD cluster with SSI. 
Host A is a redirector node that is running the following daemons: xrootd redirector cmsd xrootd - second instance that required for SSI Host B is a data server that is running the following daemons: xrootd data server cmsd XrdCnsd - started automatically by xrootd We will need to create a directory on the redirector node for Inventory files. root@host # mkdir -p /data/inventory root@host # chown xrootd:xrootd /data/inventory On the data server (host B) let's use a storage cache that will be at a different location from /mnt/xrootd . root@host # mkdir -p /local/xrootd root@host # chown xrootd:xrootd /local/xrootd We will be running two instances of XRootD on . Modify /etc/xrootd/xrootd-clustered.cfg to give the two instances different behavior, as such: all.export /data/xrootdfs set xrdr= all.manager $(xrdr):3121 if $(xrdr) && named cns all.export /data/inventory xrd.port 1095 else if $(xrdr) all.role manager xrd.port 1094 else all.role server oss.localroot /local/xrootd ofs.notify closew create mkdir mv rm rmdir trunc | /usr/bin/XrdCnsd -d -D 2 -i 90 -b $(xrdr):1095:/data/inventory #add cms.space if you have less the 11GB # cms.space options https://xrootd.slac.stanford.edu/doc/dev410/cms_config.htm cms.space min 2g 5g fi The value of oss.localroot will be prepended to any file access. E.g. accessing root://:1094//data/xrootdfs/test1 will actually go to /local/xrootd/data/xrootdfs/test1 .","title":"(Optional) Adding Simple Server Inventory to your cluster"},{"location":"data/xrootd/install-storage-element/#starting-a-second-instance-of-xrootd","text":"Create a symlink pointing to /etc/xrootd/xrootd-clustered.cfg at /etc/xrootd/xrootd-cns.cfg : root@host # ln -s /etc/xrootd/xrootd-clustered.cfg /etc/xrootd/xrootd-cns.cfg Start an instance of the xrootd service named cns using the syntax in the managing services section : root@host # systemctl start xrootd@cns","title":"Starting a second instance of XRootD"},{"location":"data/xrootd/install-storage-element/#testing-an-xrootd-cluster-with-ssi","text":"Copy file to redirector node specifying storage path (/data/xrootdfs instead of /mnt/xrootd): root@host # xrdcp /bin/sh root://:1094//data/xrootdfs/test1 [xrootd] Total 0.00 MB [================] 100.00 % [inf MB/s] To verify that SSI is working execute cns_ssi command on the redirector node: root@host # cns_ssi list /data/inventory fermicloud054.fnal.gov incomplete inventory as of Mon Apr 11 17:28:11 2011 root@host # cns_ssi updt /data/inventory cns_ssi: fermicloud054.fnal.gov inventory with 1 directory and 1 file updated with 0 errors. root@host # cns_ssi list /data/inventory fermicloud054.fnal.gov complete inventory as of Tue Apr 12 07:38:29 2011 /data/xrootdfs/test1 Note : In this example, fermicloud53.fnal.gov is a redirector node and fermicloud054.fnal.gov is a data node.","title":"Testing an XRootD cluster with SSI"},{"location":"data/xrootd/install-storage-element/#optional-enabling-xrootd-over-http","text":"XRootD can be accessed using the HTTP protocol. To do that: Add the following line to /etc/xrootd/config.d/10-common-site-local.cfg : set EnableHttp = 1 Testing the configuration From the terminal, generate a proxy and attempt to use davix-get to copy from your XRootD host (the XRootD service needs running; see the services section ). 
For example, if your server has a file named /store/user/test.root : davix-get https://:1094/store/user/test.root -E /mnt/xrootd/x509up_u`id -u` --capath /etc/grid-security/certificates Note For clients to successfully read from the regional redirector, HTTPS must be enabled for the data servers and the site-level redirector. Warning If you have u * in your Authfile, recall this provides an authorization to ALL users, including unauthenticated. This includes random web spiders!","title":"(Optional) Enabling Xrootd over HTTP"},{"location":"data/xrootd/install-storage-element/#optional-enable-http-based-writes","text":"No changes to the HTTP module is needed to enable HTTP-based writes. The HTTP protocol uses the same authorization setup as the XRootD protocol. For example, you may need to provide a (all) style authorizations to allow users authorization to write. See the Authentication File section for more details.","title":"(Optional) Enable HTTP based Writes"},{"location":"data/xrootd/install-storage-element/#optional-enabling-a-fuse-mount","text":"XRootD storage can be mounted as a standard POSIX filesystem via FUSE, providing users with a more familiar interface.. Modify /etc/fstab by adding the following entries: .... xrootdfs /mnt/xrootd fuse rdr=xroot://:1094/,uid=xrootd 0 0 Replace /mnt/xrootd with the path that you would like to access with. Create /mnt/xrootd directory. Make sure the xrootd user exists on the system. Once you are finished, you can mount it: mount /mnt/xrootd You should now be able to run UNIX commands such as ls /mnt/xrootd to see the contents of the XRootD server.","title":"(Optional) Enabling a FUSE mount"},{"location":"data/xrootd/install-storage-element/#optional-authorization","text":"For information on how to configure XRootD authorization, please refer to the Configuring XRootD Authorization guide .","title":"(Optional) Authorization"},{"location":"data/xrootd/install-storage-element/#optional-adding-cms-tfc-support-to-xrootd-cms-sites-only","text":"For CMS users, there is a package available to integrate rule-based name lookup using a storage.xml file. See this documentation .","title":"(Optional) Adding CMS TFC support to XRootD (CMS sites only)"},{"location":"data/xrootd/install-storage-element/#optional-adding-multi-user-support-for-an-xrootd-server","text":"For documentation how to enable multi-user support using XRootD see this documentation .","title":"(Optional) Adding Multi user support for an XRootd server"},{"location":"data/xrootd/install-storage-element/#optional-adding-file-residency-manager-frm-to-an-xrootd-cluster","text":"If you have a multi-tiered storage system (e.g. some data is stored on SSDs and some on disks or tapes), then install the File Residency Manager (FRM), so you can move data between tiers more easily. If you do not have a multi-tiered storage system, then you do not need FRM and you can skip this section. The FRM deals with two major mechanisms: local disk remote servers The description of fully functional multiple XRootD clusters is beyond the scope of this document. In order to have this fully functional system you will need a global redirector and at least one remote XRootD cluster from where files could be moved to the local cluster. 
Below are the modifications you should make in order to enable FRM on your local cluster: Make sure that FRM is enabled in /etc/sysconfig/xrootd on your data sever: ROOTD_USER=xrootd XROOTD_GROUP=xrootd XROOTD_DEFAULT_OPTIONS=\"-l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg\" CMSD_DEFAULT_OPTIONS=\"-l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg\" FRMD_DEFAULT_OPTIONS=\"-l /var/log/xrootd/frmd.log -c /etc/xrootd/xrootd-clustered.cfg\" XROOTD_INSTANCES=\"default\" CMSD_INSTANCES=\"default\" FRMD_INSTANCES=\"default\" Modify /etc/xrootd/xrootd-clustered.cfg on both nodes to specify options for frm_xfrd (File Transfer Daemon) and frm_purged (File Purging Daemon). For more information, you can visit the FRM Documentation Start frm daemons on data server: root@host # service frm_xfrd start root@host # service frm_purged start","title":"(Optional) Adding File Residency Manager (FRM) to an XRootd cluster"},{"location":"data/xrootd/install-storage-element/#using-xrootd","text":"","title":"Using XRootD"},{"location":"data/xrootd/install-storage-element/#managing-xrootd-services","text":"Start services on the redirector node before starting any services on the data nodes. If you installed only XRootD itself, you will only need to start the xrootd service. However, if you installed cluster management services, you will need to start cmsd as well. XRootD determines which configuration to use based on the service name specified by systemctl . For example, to have xrootd use the clustered config, you would start up xrootd with this line: root@host # systemctl start xrootd@clustered To use the standalone config instead, you would use: root@host # systemctl start xrootd@standalone The services are: Service EL 7 & 8 service name XRootD (standalone config) xrootd@standalone XRootD (clustered config) xrootd@clustered XRootD (multiuser) xrootd-privileged@clustered CMSD (clustered config) cmsd@clustered As a reminder, here are common service commands (all run as root ): To ... On EL 7 & 8, run the command... Start a service systemctl start SERVICE-NAME Stop a service systemctl stop SERVICE-NAME Enable a service to start during boot systemctl enable SERVICE-NAME Disable a service from starting during boot systemctl disable SERVICE-NAME","title":"Managing XRootD services"},{"location":"data/xrootd/install-storage-element/#getting-help","text":"To get assistance. please use the Help Procedure page.","title":"Getting Help"},{"location":"data/xrootd/install-storage-element/#reference","text":"","title":"Reference"},{"location":"data/xrootd/install-storage-element/#file-locations","text":"Service/Process Configuration File Description xrootd /etc/xrootd/xrootd-clustered.cfg Main clustered mode XRootD configuration /etc/xrootd/auth_file Authorized users file Service/Process Log File Description xrootd /var/log/xrootd/xrootd.log XRootD server daemon log cmsd /var/log/xrootd/cmsd.log Cluster management log cns /var/log/xrootd/cns/xrootd.log Server inventory (composite name space) log frm_xfrd , frm_purged /var/log/xrootd/frmd.log File Residency Manager log","title":"File locations"},{"location":"data/xrootd/install-storage-element/#links","text":"XRootD documentation","title":"Links"},{"location":"data/xrootd/overview/","text":"XRootD Overview \u00b6 XRootD is a highly-configurable data server used by sites in the OSG to support VO-specific storage needs. 
The software can be used to create an export of an existing file system through multiple protocols, participate in a data federation, or act as a caching service. XRootD data servers can stream data directly to client applications or support experiment-wide data management by performing bulk data transfer via \"third-party-copy\" between distinct sites. The OSG supports multiple different configurations of XRootD: XCache \u00b6 Previously known as the \"XRootD proxy cache\", XCache provides a caching service for data federations that serve one or more VOs. If your site contributes large amounts of computing resources to the OSG, a site XCache could be part of a solution to help reduce incoming WAN usage. In the OSG, there are three data federations based on XCache: ATLAS XCache, CMS XCache, and StashCache for all other VOs. If you are affiliated with a site or VO interested in contributing to a data federation, contact us at help@osg-htc.org . XRootD Standalone \u00b6 An XRootD standalone server exports data from an existing network storage solution, such as HDFS or Lustre, using both the XRootD and WebDAV protocols. Generally, only sites affiliated with large VOs would need to install an XRootD standalone server so consult your VO if you are interested in contributing storage.","title":"XRootD Overview"},{"location":"data/xrootd/overview/#xrootd-overview","text":"XRootD is a highly-configurable data server used by sites in the OSG to support VO-specific storage needs. The software can be used to create an export of an existing file system through multiple protocols, participate in a data federation, or act as a caching service. XRootD data servers can stream data directly to client applications or support experiment-wide data management by performing bulk data transfer via \"third-party-copy\" between distinct sites. The OSG supports multiple different configurations of XRootD:","title":"XRootD Overview"},{"location":"data/xrootd/overview/#xcache","text":"Previously known as the \"XRootD proxy cache\", XCache provides a caching service for data federations that serve one or more VOs. If your site contributes large amounts of computing resources to the OSG, a site XCache could be part of a solution to help reduce incoming WAN usage. In the OSG, there are three data federations based on XCache: ATLAS XCache, CMS XCache, and StashCache for all other VOs. If you are affiliated with a site or VO interested in contributing to a data federation, contact us at help@osg-htc.org .","title":"XCache"},{"location":"data/xrootd/overview/#xrootd-standalone","text":"An XRootD standalone server exports data from an existing network storage solution, such as HDFS or Lustre, using both the XRootD and WebDAV protocols. Generally, only sites affiliated with large VOs would need to install an XRootD standalone server so consult your VO if you are interested in contributing storage.","title":"XRootD Standalone"},{"location":"data/xrootd/xrootd-authorization/","text":"Configuring XRootD Authorization \u00b6 XRootD offers several authentication options using security plugins to validate incoming credentials, such as bearer tokens, X.509 proxies, and VOMS proxies. In the case of X.509 and VOMS proxies, after the incoming credential has been mapped to a username or groupname, the authorization database is used to provide fine-grained file access. Note On data nodes, files will be owned by Unix user xrootd (or other daemon user), not as the user authenticated to, under most circumstances. 
XRootD will verify the permissions and authorization based on the user that the security plugin authenticates you to, but, internally, the data node files will be owned by the xrootd user. If this behaviour is not desired, enable XRootD multi-user support . Authorizing Bearer Tokens \u00b6 XRootD supports authorization of bearer tokens such as macaroons, SciTokens, or WLCG tokens. Encoded in the bearer tokens themselves are information about the files that they should have read/write access to and in the case of SciTokens and WLCG tokens, you may configure XRootD to further restrict access. Configuring SciTokens/WLCG Tokens \u00b6 SciTokens and WLCG Tokens are asymmetrically signed bearer tokens: they are signed by a token issuer (e.g., CILogon, IAM) and can be verified with the token issuer's public key. To configure XRootD to accept tokens from a given token issuer use the following instructions: Add a section for each token issuer to /etc/xrootd/scitokens.conf : [Issuer ] issuer = base_path = Replacing with a descriptive name, with the token issuer URL, and base_path to a path relative to rootdir that the client should be restricted to accessing. (Optional) if you want to map the incoming token for a given issuer to a Unix username: Install xrootd-multiuser Add the following to the relevant issuer section in /etc/xrootd/scitokens.conf : map_subject = True (Optional) if you want to only accept tokens with the appropriate aud field, add the following to /etc/xrootd/scitokens.conf : [Global] audience = An example configuration that supports tokens issued by the OSG Connect and CMS: [Global] audience = https://testserver.example.com/, MySite [Issuer OSG-Connect] issuer = https://scitokens.org/osg-connect base_path = /stash map_subject = True [Issuer CMS] issuer = https://scitokens.org/cms base_path = /user/cms Configuring macaroons \u00b6 Macaroons are symetrically signed bearer tokens so your XRootD host must have access to the same secret key that is used to sign incoming macaroons. When used in an XRootD cluster, all data nodes and the redirector need access to the same secret. To enable macaroon support: Place the shared secret in /etc/xrootd/macaroon-secret Ensure that it has the appropriate file ownership and permissions: root@host # chown xrootd:xrootd /etc/xrootd/macaroon-secret root@host # chmod 0600 /etc/xrootd/macaroon-secret Authorizing X.509 proxies \u00b6 Authenticating proxies \u00b6 Authorizations for proxy-based security are declared in an XRootD authorization database file . XRootD authentication plugins are used to provide the mappings that are used in the database. Starting with OSG 3.6 , DN mappings are performed with XRootD's built-in GSI support, and FQAN mappings are with the XRootD-VOMS ( XrdVoms ) plugin. To enable proxy authentication, edit /etc/xrootd/config.d/10-osg-xrdvoms.cfg and add or uncomment the line set EnableVoms = 1 Note Proxy authentication is already enabled in XRootD Standalone , so this step is not necessary there. Requirements for XRootD-Multiuser with VOMS FQANs Using XRootD-Multiuser with a VOMS FQAN requires XRootD 5.5.0 or newer. Key length requirements Servers on EL 8 or newer will reject proxies that are not at least 2048 bits long. Ensure your clients' proxies have at least 2048 bits long with voms-proxy-info ; if necessary, have them add the argument -bits 2048 to their voms-proxy-init calls. 
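For example, a client can check the key strength of an existing proxy and regenerate it with a longer key if needed. This is a sketch with illustrative output; the VO name myvo is a placeholder for the client's actual VO:

    user@client $ voms-proxy-info | grep -i strength
    strength  : 1024
    user@client $ voms-proxy-init -voms myvo -bits 2048
    user@client $ voms-proxy-info | grep -i strength
    strength  : 2048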
Mapping subject DNs \u00b6 DN mappings take precedence over VOMS attributes If you have mapped the subject Distinguished Name (DN) of an incoming proxy with VOMS attributes, XRootD will map it to a username. X.509 proxies are mapped using the built-in XRootD GSI plug-in. To map an incoming proxy's subject DN to an XRootD username , add lines of the following format to /etc/grid-security/grid-mapfile : \"\" Replacing with the X.509 proxy's DN to map and with the username to reference in the authorization database . For example, the following mapping: \"/DC=org/DC=cilogon/C=US/O=University of Wisconsin-Madison/CN=Brian Lin A2266246\" blin Will result in the username blin , i.e. authorize access to clients presenting the above proxy with u blin ... in the authorization database. Mapping VOMS attributes \u00b6 Requirements for XRootD-Multiuser with VOMS FQANs Using XRootD-Multiuser with a VOMS FQAN requires XRootD 5.5.0 or newer. If the XRootD-VOMS plugin is enabled, an incoming VOMS proxy will authenticate the first VOMS FQAN and map it to an organization name ( o ), groupname ( g ), and role name ( r ) in the authorization database . For example, a proxy from the OSPool whose first VOMS FQAN is /osg/Role=NULL/Capability=NULL will be authenticated to the /osg groupname; note that the / is included in the groupname. Instead of only using the first VOMS FQAN, you can configure XRootD to consider all VOMS FQANs in the proxy for authentication by setting the following in /etc/xrootd/config.d/10-osg-xrdvoms.cfg : set vomsfqans = useall Mapping VOMS attributes to users \u00b6 In order for the XRootD-Multiuser plugin to work, a proxy must be mapped to a user ( u ) that is a valid Unix user. Use a VOMS Mapfile, conventionally in /etc/grid-security/voms-mapfile that contains lines in the following form: \"\" replacing with a glob matching FQANs, and with the user that you want to map matching FQANs to. For example, \"/osg/*\" osg01 will map FQANs starting with /osg/ to the user osg01 . To enable using VOMS mapfiles in the first place, add the following line to your XRootD configuration: voms.mapfile /etc/grid-security/voms-mapfile replacing /etc/grid-security/voms-mapfile with the actual location of your mapfile, if it is different. Note A VOMS Mapfile only affects mapping the user ( u ) attribute understood in the authorization-database . The FQAN will always be used for the groupname ( g ), organization name ( o ), and role name ( r ), even if the mapfile is missing or does not contain a matching mapping. See the VOMS Mapping documentation for details. VOMS Mapfiles previously used with LCMAPS should continue to work unmodified, but the plugin can only look at a single mapfile, so if you are using the mappings provided in /usr/share/osg/voms-mapfile-default (by the vo-client-lcmaps-voms package), you will have to copy them to /etc/grid-security/voms-mapfile . Authorization database \u00b6 XRootD allows configuring fine-grained file access permissions based on authenticated identities and paths. This is configured in the authorization file /etc/xrootd/Authfile , which should be writable only by the xrootd user, optionally readable by others. 
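If the file does not exist yet, one way to create it up front with the expected ownership and permissions (a sketch; the same chown/chmod commands appear again in the verification section below) is:

    root@host # touch /etc/xrootd/Authfile
    root@host # chown xrootd:xrootd /etc/xrootd/Authfile
    root@host # chmod 0644 /etc/xrootd/Authfile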
Here is an example /etc/xrootd/Authfile : # This means that all the users have read access to the datasets, _except_ under /private u * /private -rl rl # Or the following, without a restricted /private dir # u * rl # This means that all the users have full access to their private home dirs u = /home/@=/ a # This means that the privileged 'xrootd' user can do everything # There must be at least one such user in order to create the # private dirs for users willing to store their data in the facility u xrootd a # This means that OSPool clients presenting a VOMS proxy can do anything under the 'osg' directory g /osg /osg a Replacing with the path to the directory that will contain data served by XRootD, e.g. /data/xrootdfs . This path is relative to the rootdir . Configure most to least specific paths Specific paths need to be specified before generic paths. For example, this line will allow all users to read the contents of /data/xrootdfs/private : u * /data/xrootdfs rl /data/xrootdfs/private -rl Instead, specify the following to ensure that a given user will not be able to read the contents of /data/xrootdfs/private unless specified with another authorization rule: u * /data/xrootdfs/private -rl /data/xrootdfs rl Formatting \u00b6 More generally, each authorization rule of the authorization database has the following form: idtype id path privs Field Description idtype Type of id. Use u for username, g for groupname, o for organization name, r for role name, etc. id ID name, e.g. username or groupname. Use * for all users or = for user-specific capabilities, like home directories path The path prefix to be used for matching purposes. @= expands to the current user name before a path prefix match is attempted privs Letter list of privileges: a - all ; l - lookup ; d - delete ; n - rename ; i - insert ; r - read ; k - lock (not used) ; w - write ; - - prefix to remove specified privileges For more details or examples on how to use templated user options, see XRootD authorization database . Verifying file ownership and permissions \u00b6 Ensure the authorization database file is owned by xrootd (if you have created the file as root), and that it is not writable by others. root@host # chown xrootd:xrootd /etc/xrootd/Authfile root@host # chmod 0640 /etc/xrootd/Authfile # or 0644 Multiuser and the authorization database \u00b6 The XRootD-Multiuser plugin can be used to perform file system operations as a different user than the XRootD daemon (whose user is xrootd ). If it is enabled, then after authorization is done using the authorization database, XRootD will take the user ( u ) attribute of the incoming request, and perform file operations as the Unix user with the same name as that attribute. Note If there is no Unix user with a matching name, you will see an error like XRootD mapped request to username that does not exist: ; the operation will then fail with \"EACCES\" (access denied). Applying Authorization Changes \u00b6 After making changes to your authorization database , you must restart the relevant services . Verifying XRootD Authorization \u00b6 Bearer tokens \u00b6 To test read access using macaroon, SciTokens, and WLCG token authorization, run the following command: user@host $ curl -v \\ -H 'Authorization: Bearer ' \\ https://host.example.com//path/to/directory/hello_world Replacing with the contents of your encoded token, host.example.com with the target XRootD host, and /path/to/directory/hello_world with the path of the file to read.
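For instance, if the encoded token is stored in a local file, the read test might look like the following sketch; the token path, hostname, and file path are all placeholders, and a 200 response returning the file contents indicates the token was accepted:

    user@host $ curl -v \
        -H "Authorization: Bearer $(cat /tmp/mytoken)" \
        https://xrootd.example.com/store/user/test/hello_world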
To test write access, using macaroon, SciTokens, and WLCG token authorization, run the following command: user@host $ curl -v \\ -X PUT \\ --upload-file \\ -H 'Authorization: Bearer ' \\ https://host.example.com//path/to/directory/hello_world Replacing with the contents of your encoded token, with the file to write to the XRootD host, host.example.com with the target XRootD host, and /path/to/directory/hello_world with the path of the file to write. X.509 and VOMS proxies \u00b6 To verify X.509 and VOMS proxy authorization, run the following commands from a machine with your user certificate/key pair, xrootd-client , and voms-clients-cpp installed: Destroy any pre-existing proxies and attempt a copy to a directory (which we will refer to as ) on the to verify failure: user@client $ voms-proxy-destroy user@client $ xrdcp /bin/bash root:/// 180213 13:56:49 396570 cryptossl_X509CreateProxy: EEC certificate has expired [0B/0B][100%][==================================================][0B/s] Run: [FATAL] Auth failed On the XRootD host, add your DN to /etc/grid-security/grid-mapfile Add a line to the authorization database to ensure the mapped user can write to Restart the relevant XRootD services. See this section for details Generate your proxy and verify that you can successfully transfer files: user@client $ voms-proxy-init user@client $ xrdcp /bin/sh root:/// [938.1kB/938.1kB][100%][==================================================][938.1kB/s] If your transfer does not succeed, re-run xrdcp with --debug 2 for more information. Updating to OSG 23 \u00b6 There are no manual steps necessary for authentication to work when upgrading from OSG 3.6 to OSG 23. If you are upgrading from an earlier release series, see the updating to OSG 3.6 section below. Updating to OSG 3.6 \u00b6 There are some manual steps that need to be taken for authentication to work in OSG 3.6. Ensure OSG XRootD packages are fully up-to-date \u00b6 Some authentication configuration is provided by OSG packaging. Old versions of the packages may result in broken configuration. It is best if your packages match the versions in the appropriate release subdirectories of https://repo.opensciencegrid.org/osg/3.6/ , but at the very least these should be true: xrootd >= 5.4 xrootd-multiuser >= 2 (if using multiuser) xrootd-scitokens >= 5.4 (if using SciTokens/WLCG Tokens) xrootd-voms >= 5.4.2-1.1 (if using VOMS auth) osg-xrootd >= 3.6 osg-xrootd-standalone >= 3.6 (if installed) xcache >= 3 (if using xcache-derived software such as stash-cache, stash-origin, atlas-xcache, or cms-xcache) SciToken auth \u00b6 Updating from XRootD 4 (OSG 3.5 without 3.5-upcoming) \u00b6 The config syntax for adding auth plugins has changed between XRootD 4 and XRootD 5. Replace ofs.authlib libXrdAccSciTokens.so ... with ofs.authlib ++ libXrdAccSciTokens.so ... Updating from XRootD 5 (OSG 3.5 with 3.5-upcoming) \u00b6 No config changes are necessary. Proxy auth: transitioning from XrdLcmaps to XrdVoms \u00b6 In OSG 3.5 and previous, proxy authentication was handled by the XrdLcmaps plugin, provided in the xrootd-lcmaps RPM. This is no longer the case in OSG 3.6; instead it is handled by the XrdVoms plugin, provided in the xrootd-voms RPM. To continue using proxy authentication, update your configuration and your authorization database (Authfile) as described below. Updating XRootD configuration \u00b6 Remove any old config in /etc/xrootd and /etc/xrootd/config.d that mentions LCMAPS or libXrdLcmaps.so , otherwise XRootD may fail to start. 
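One quick way to locate any leftover references before restarting (a sketch) is:

    root@host # grep -ril -e lcmaps -e libXrdLcmaps /etc/xrootd

Any files this lists should be cleaned up or removed before XRootD is restarted.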
If you do not have both an unauthenticated stash-cache and an authenticated stash-cache on the same server, uncomment set EnableVoms = 1 in /etc/xrootd/config.d/10-osg-xrdvoms.cfg . If you have both an an authenticated stash-cache and an unauthenticated stash-cache on the same server, add the following block to /etc/xrootd/config.d/10-osg-xrdvoms.cfg : if named stash-cache-auth set EnableVoms = 1 fi If you are using XRootD Multiuser, create a VOMS Mapfile at /etc/grid-security/voms-mapfile , with the syntax described above , then add voms.mapfile /etc/grid-security/voms-mapfile to your XRootD config if it's not already present. Note In order to make yum update easier, xrootd-lcmaps has been replaced with an empty package, which can be removed after upgrading. Updating your authorization database \u00b6 Unlike the XrdLcmaps plugin, which mapped VOMS FQANs to users u , the XrdVoms plugin maps FQANs to groups g , roles r , and organizations o , as described in the mapping VOMS attributes section . You can still use a VOMS mapfile but if you want to use the mappings provided at /usr/share/osg/voms-mapfile-default by the vo-client-lcmaps-voms package, you must copy them to /etc/grid-security/voms-mapfile . Replace mappings based on users with mappings based on the other attributes. For example, instead of u uscmslocal /uscms rl use g /cms/uscms /uscms rl If you need to make a mapping based on group and role, create and use a \"compound ID\" as described in the XRootD security documentation . # create the ID named \"cmsprod\" = cmsprod g /cms r Production # use it x cmsprod /cmsprod rl","title":"Configure Authorization"},{"location":"data/xrootd/xrootd-authorization/#configuring-xrootd-authorization","text":"XRootD offers several authentication options using security plugins to validate incoming credentials, such as bearer tokens, X.509 proxies, and VOMS proxies. In the case of X.509 and VOMS proxies, after the incoming credential has been mapped to a username or groupname, the authorization database is used to provide fine-grained file access. Note On data nodes, files will be owned by Unix user xrootd (or other daemon user), not as the user authenticated to, under most circumstances. XRootD will verify the permissions and authorization based on the user that the security plugin authenticates you to, but, internally, the data node files will be owned by the xrootd user. If this behaviour is not desired, enable XRootD multi-user support .","title":"Configuring XRootD Authorization"},{"location":"data/xrootd/xrootd-authorization/#authorizing-bearer-tokens","text":"XRootD supports authorization of bearer tokens such as macaroons, SciTokens, or WLCG tokens. Encoded in the bearer tokens themselves are information about the files that they should have read/write access to and in the case of SciTokens and WLCG tokens, you may configure XRootD to further restrict access.","title":"Authorizing Bearer Tokens"},{"location":"data/xrootd/xrootd-authorization/#configuring-scitokenswlcg-tokens","text":"SciTokens and WLCG Tokens are asymmetrically signed bearer tokens: they are signed by a token issuer (e.g., CILogon, IAM) and can be verified with the token issuer's public key. 
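You can usually confirm that an issuer is reachable and publishing its keys by fetching its OpenID discovery document; the issuer URL below is one of the examples used later on this page:

    user@host $ curl -s https://scitokens.org/osg-connect/.well-known/openid-configuration

The JSON response should include a jwks_uri entry pointing at the issuer's public keys, which is, roughly speaking, what the token plugin uses to validate signatures.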
To configure XRootD to accept tokens from a given token issuer use the following instructions: Add a section for each token issuer to /etc/xrootd/scitokens.conf : [Issuer ] issuer = base_path = Replacing with a descriptive name, with the token issuer URL, and base_path to a path relative to rootdir that the client should be restricted to accessing. (Optional) if you want to map the incoming token for a given issuer to a Unix username: Install xrootd-multiuser Add the following to the relevant issuer section in /etc/xrootd/scitokens.conf : map_subject = True (Optional) if you want to only accept tokens with the appropriate aud field, add the following to /etc/xrootd/scitokens.conf : [Global] audience = An example configuration that supports tokens issued by the OSG Connect and CMS: [Global] audience = https://testserver.example.com/, MySite [Issuer OSG-Connect] issuer = https://scitokens.org/osg-connect base_path = /stash map_subject = True [Issuer CMS] issuer = https://scitokens.org/cms base_path = /user/cms","title":"Configuring SciTokens/WLCG Tokens"},{"location":"data/xrootd/xrootd-authorization/#configuring-macaroons","text":"Macaroons are symetrically signed bearer tokens so your XRootD host must have access to the same secret key that is used to sign incoming macaroons. When used in an XRootD cluster, all data nodes and the redirector need access to the same secret. To enable macaroon support: Place the shared secret in /etc/xrootd/macaroon-secret Ensure that it has the appropriate file ownership and permissions: root@host # chown xrootd:xrootd /etc/xrootd/macaroon-secret root@host # chmod 0600 /etc/xrootd/macaroon-secret","title":"Configuring macaroons"},{"location":"data/xrootd/xrootd-authorization/#authorizing-x509-proxies","text":"","title":"Authorizing X.509 proxies"},{"location":"data/xrootd/xrootd-authorization/#authenticating-proxies","text":"Authorizations for proxy-based security are declared in an XRootD authorization database file . XRootD authentication plugins are used to provide the mappings that are used in the database. Starting with OSG 3.6 , DN mappings are performed with XRootD's built-in GSI support, and FQAN mappings are with the XRootD-VOMS ( XrdVoms ) plugin. To enable proxy authentication, edit /etc/xrootd/config.d/10-osg-xrdvoms.cfg and add or uncomment the line set EnableVoms = 1 Note Proxy authentication is already enabled in XRootD Standalone , so this step is not necessary there. Requirements for XRootD-Multiuser with VOMS FQANs Using XRootD-Multiuser with a VOMS FQAN requires XRootD 5.5.0 or newer. Key length requirements Servers on EL 8 or newer will reject proxies that are not at least 2048 bits long. Ensure your clients' proxies have at least 2048 bits long with voms-proxy-info ; if necessary, have them add the argument -bits 2048 to their voms-proxy-init calls.","title":"Authenticating proxies"},{"location":"data/xrootd/xrootd-authorization/#mapping-subject-dns","text":"DN mappings take precedence over VOMS attributes If you have mapped the subject Distinguished Name (DN) of an incoming proxy with VOMS attributes, XRootD will map it to a username. X.509 proxies are mapped using the built-in XRootD GSI plug-in. To map an incoming proxy's subject DN to an XRootD username , add lines of the following format to /etc/grid-security/grid-mapfile : \"\" Replacing with the X.509 proxy's DN to map and with the username to reference in the authorization database . 
For example, the following mapping: \"/DC=org/DC=cilogon/C=US/O=University of Wisconsin-Madison/CN=Brian Lin A2266246\" blin Will result in the username blin , i.e. authorize access to clients presenting the above proxy with u blin ... in the authorization database.","title":"Mapping subject DNs"},{"location":"data/xrootd/xrootd-authorization/#mapping-voms-attributes","text":"Requirements for XRootD-Multiuser with VOMS FQANs Using XRootD-Multiuser with a VOMS FQAN requires XRootD 5.5.0 or newer. If the XRootD-VOMS plugin is enabled, an incoming VOMS proxy will authenticate the first VOMS FQAN and map it to an organization name ( o ), groupname ( g ), and role name ( r ) in the authorization database . For example, a proxy from the OSPool whose first VOMS FQAN is /osg/Role=NULL/Capability=NULL will be authenticated to the /osg groupname; note that the / is included in the groupname. Instead of only using the first VOMS FQAN, you can configure XRootD to consider all VOMS FQANs in the proxy for authentication by setting the following in /etc/xrootd/config.d/10-osg-xrdvoms.cfg : set vomsfqans = useall","title":"Mapping VOMS attributes"},{"location":"data/xrootd/xrootd-authorization/#mapping-voms-attributes-to-users","text":"In order for the XRootD-Multiuser plugin to work, a proxy must be mapped to a user ( u ) that is a valid Unix user. Use a VOMS Mapfile, conventionally in /etc/grid-security/voms-mapfile that contains lines in the following form: \"\" replacing with a glob matching FQANs, and with the user that you want to map matching FQANs to. For example, \"/osg/*\" osg01 will map FQANs starting with /osg/ to the user osg01 . To enable using VOMS mapfiles in the first place, add the following line to your XRootD configuration: voms.mapfile /etc/grid-security/voms-mapfile replacing /etc/grid-security/voms-mapfile with the actual location of your mapfile, if it is different. Note A VOMS Mapfile only affects mapping the user ( u ) attribute understood in the authorization-database . The FQAN will always be used for the groupname ( g ), organization name ( o ), and role name ( r ), even if the mapfile is missing or does not contain a matching mapping. See the VOMS Mapping documentation for details. VOMS Mapfiles previously used with LCMAPS should continue to work unmodified, but the plugin can only look at a single mapfile, so if you are using the mappings provided in /usr/share/osg/voms-mapfile-default (by the vo-client-lcmaps-voms package), you will have to copy them to /etc/grid-security/voms-mapfile .","title":"Mapping VOMS attributes to users"},{"location":"data/xrootd/xrootd-authorization/#authorization-database","text":"XRootD allows configuring fine-grained file access permissions based on authenticated identities and paths. This is configured in the authorization file /etc/xrootd/Authfile , which should be writable only by the xrootd user, optionally readable by others. 
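The OSG-provided base configurations generally point XRootD at this file already; if yours does not, the relevant directives look roughly like the following sketch (not a full configuration):

    # enable the authorization framework and point it at the database file
    ofs.authorize
    acc.authdb /etc/xrootd/Authfile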
Here is an example /etc/xrootd/Authfile : # This means that all the users have read access to the datasets, _except_ under /private u * /private -rl rl # Or the following, without a restricted /private dir # u * rl # This means that all the users have full access to their private home dirs u = /home/@=/ a # This means that the privileged 'xrootd' user can do everything # There must be at least one such user in order to create the # private dirs for users willing to store their data in the facility u xrootd a # This means that OSPool clients presenting a VOMS proxy can do anything under the 'osg' directory g /osg /osg a Replacing with the path to the directory that will contain data served by XRootD, e.g. /data/xrootdfs . This path is relative to the rootdir . Configure most to least specific paths Specific paths need to be specified before generic paths. For example, this line will allow all users to read the contents /data/xrootdfs/private : u * /data/xrootdfs rl /data/xrootdfs/private -rl Instead, specify the following to ensure that a given user will not be able to read the contents of /data/xrootdfs/private unless specified with another authorization rule: u * /data/xrootdfs/private -rl /data/xrootdfs rl","title":"Authorization database"},{"location":"data/xrootd/xrootd-authorization/#formatting","text":"More generally, each authorization rule of the authorization database has the following form: idtype id path privs Field Description idtype Type of id. Use u for username, g for groupname, o for organization name, r for role name, etc. id ID name, e.g. username or groupname. Use * for all users or = for user-specific capabilities, like home directories path The path prefix to be used for matching purposes. @= expands to the current user name before a path prefix match is attempted privs Letter list of privileges: a - all ; l - lookup ; d - delete ; n - rename ; i - insert ; r - read ; k - lock (not used) ; w - write ; - - prefix to remove specified privileges For more details or examples on how to use templated user options, see XRootD authorization database .","title":"Formatting"},{"location":"data/xrootd/xrootd-authorization/#verifying-file-ownership-and-permissions","text":"Ensure the authorization datbase file is owned by xrootd (if you have created file as root), and that it is not writable by others. root@host # chown xrootd:xrootd /etc/xrootd/Authfile root@host # chmod 0640 /etc/xrootd/Authfile # or 0644","title":"Verifying file ownership and permissions"},{"location":"data/xrootd/xrootd-authorization/#multiuser-and-the-authorization-database","text":"The XRootD-Multiuser plugin can be used to perform file system operations as a different user than the XRootD daemon (whose user is xrootd ). If it is enabled, then after authorization is done using the authorization database, XRootD will take the user ( u ) attribute of the incoming request, and perform file operations as the Unix user with the same name as that attribute. 
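Before relying on multiuser, it is worth confirming that every username your mappings can produce corresponds to a local account; osg01 and the output below are purely illustrative:

    root@host # getent passwd osg01
    osg01:x:12345:12345:OSPool pool account:/home/osg01:/bin/bash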
Note If there is no Unix user with a matching name, you will see an error like XRootD mapped request to username that does not exist: ; the operation will then fail with \"EACCES\" (access denied).","title":"Multiuser and the authorization database"},{"location":"data/xrootd/xrootd-authorization/#applying-authorization-changes","text":"After making changes to your authorization database , you must restart the relevant services .","title":"Applying Authorization Changes"},{"location":"data/xrootd/xrootd-authorization/#verifying-xrootd-authorization","text":"","title":"Verifying XRootD Authorization"},{"location":"data/xrootd/xrootd-authorization/#bearer-tokens","text":"To test read access using macaroon, SciTokens, and WLCG token authorization, run the following command: user@host $ curl -v \\ -H 'Authorization: Bearer ' \\ https://host.example.com//path/to/directory/hello_world Replacing with the contents of your encoded token, host.example.com with the target XRootD host, and /path/to/directory/hello_world with the path of the file to read. To test write access, using macaroon, SciTokens, and WLCG token authorization, run the following command: user@host $ curl -v \\ -X PUT \\ --upload-file \\ -H 'Authorization: Bearer ' \\ https://host.example.com//path/to/directory/hello_world Replacing with the contents of your encoded token, with the file to write to the XRootD host, host.example.com with the target XRootD host, and /path/to/directory/hello_world with the path of the file to write.","title":"Bearer tokens"},{"location":"data/xrootd/xrootd-authorization/#x509-and-voms-proxies","text":"To verify X.509 and VOMS proxy authorization, run the following commands from a machine with your user certificate/key pair, xrootd-client , and voms-clients-cpp installed: Destroy any pre-existing proxies and attempt a copy to a directory (which we will refer to as ) on the to verify failure: user@client $ voms-proxy-destroy user@client $ xrdcp /bin/bash root:/// 180213 13:56:49 396570 cryptossl_X509CreateProxy: EEC certificate has expired [0B/0B][100%][==================================================][0B/s] Run: [FATAL] Auth failed On the XRootD host, add your DN to /etc/grid-security/grid-mapfile Add a line to the authorization database to ensure the mapped user can write to Restart the relevant XRootD services. See this section for details Generate your proxy and verify that you can successfully transfer files: user@client $ voms-proxy-init user@client $ xrdcp /bin/sh root:/// [938.1kB/938.1kB][100%][==================================================][938.1kB/s] If your transfer does not succeed, re-run xrdcp with --debug 2 for more information.","title":"X.509 and VOMS proxies"},{"location":"data/xrootd/xrootd-authorization/#updating-to-osg-23","text":"There are no manual steps necessary for authentication to work when upgrading from OSG 3.6 to OSG 23. If you are upgrading from an earlier release series, see the updating to OSG 3.6 section below.","title":"Updating to OSG 23"},{"location":"data/xrootd/xrootd-authorization/#updating-to-osg-36","text":"There are some manual steps that need to be taken for authentication to work in OSG 3.6.","title":"Updating to OSG 3.6"},{"location":"data/xrootd/xrootd-authorization/#ensure-osg-xrootd-packages-are-fully-up-to-date","text":"Some authentication configuration is provided by OSG packaging. Old versions of the packages may result in broken configuration. 
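A quick way to review what is currently installed on a host (a sketch; omit any packages you do not use) is:

    root@host # rpm -q xrootd xrootd-multiuser xrootd-scitokens xrootd-voms osg-xrootd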
It is best if your packages match the versions in the appropriate release subdirectories of https://repo.opensciencegrid.org/osg/3.6/ , but at the very least these should be true: xrootd >= 5.4 xrootd-multiuser >= 2 (if using multiuser) xrootd-scitokens >= 5.4 (if using SciTokens/WLCG Tokens) xrootd-voms >= 5.4.2-1.1 (if using VOMS auth) osg-xrootd >= 3.6 osg-xrootd-standalone >= 3.6 (if installed) xcache >= 3 (if using xcache-derived software such as stash-cache, stash-origin, atlas-xcache, or cms-xcache)","title":"Ensure OSG XRootD packages are fully up-to-date"},{"location":"data/xrootd/xrootd-authorization/#scitoken-auth","text":"","title":"SciToken auth"},{"location":"data/xrootd/xrootd-authorization/#updating-from-xrootd-4-osg-35-without-35-upcoming","text":"The config syntax for adding auth plugins has changed between XRootD 4 and XRootD 5. Replace ofs.authlib libXrdAccSciTokens.so ... with ofs.authlib ++ libXrdAccSciTokens.so ...","title":"Updating from XRootD 4 (OSG 3.5 without 3.5-upcoming)"},{"location":"data/xrootd/xrootd-authorization/#updating-from-xrootd-5-osg-35-with-35-upcoming","text":"No config changes are necessary.","title":"Updating from XRootD 5 (OSG 3.5 with 3.5-upcoming)"},{"location":"data/xrootd/xrootd-authorization/#proxy-auth-transitioning-from-xrdlcmaps-to-xrdvoms","text":"In OSG 3.5 and previous, proxy authentication was handled by the XrdLcmaps plugin, provided in the xrootd-lcmaps RPM. This is no longer the case in OSG 3.6; instead it is handled by the XrdVoms plugin, provided in the xrootd-voms RPM. To continue using proxy authentication, update your configuration and your authorization database (Authfile) as described below.","title":"Proxy auth: transitioning from XrdLcmaps to XrdVoms"},{"location":"data/xrootd/xrootd-authorization/#updating-xrootd-configuration","text":"Remove any old config in /etc/xrootd and /etc/xrootd/config.d that mentions LCMAPS or libXrdLcmaps.so , otherwise XRootD may fail to start. If you do not have both an unauthenticated stash-cache and an authenticated stash-cache on the same server, uncomment set EnableVoms = 1 in /etc/xrootd/config.d/10-osg-xrdvoms.cfg . If you have both an an authenticated stash-cache and an unauthenticated stash-cache on the same server, add the following block to /etc/xrootd/config.d/10-osg-xrdvoms.cfg : if named stash-cache-auth set EnableVoms = 1 fi If you are using XRootD Multiuser, create a VOMS Mapfile at /etc/grid-security/voms-mapfile , with the syntax described above , then add voms.mapfile /etc/grid-security/voms-mapfile to your XRootD config if it's not already present. Note In order to make yum update easier, xrootd-lcmaps has been replaced with an empty package, which can be removed after upgrading.","title":"Updating XRootD configuration"},{"location":"data/xrootd/xrootd-authorization/#updating-your-authorization-database","text":"Unlike the XrdLcmaps plugin, which mapped VOMS FQANs to users u , the XrdVoms plugin maps FQANs to groups g , roles r , and organizations o , as described in the mapping VOMS attributes section . You can still use a VOMS mapfile but if you want to use the mappings provided at /usr/share/osg/voms-mapfile-default by the vo-client-lcmaps-voms package, you must copy them to /etc/grid-security/voms-mapfile . Replace mappings based on users with mappings based on the other attributes. 
For example, instead of u uscmslocal /uscms rl use g /cms/uscms /uscms rl If you need to make a mapping based on group and role, create and use a \"compound ID\" as described in the XRootD security documentation . # create the ID named \"cmsprod\" = cmsprod g /cms r Production # use it x cmsprod /cmsprod rl","title":"Updating your authorization database"},{"location":"other/configuration-with-osg-configure/","text":"Configuration with OSG-Configure \u00b6 OSG-Configure and the INI files in /etc/osg/config.d allow high-level configuration of OSG services. This document outlines the settings and options found in the INI files for system administrators who are installing and configuring OSG software. This page gives an overview of the options for each of the sections of the configuration files that osg-configure uses. Invocation and script usage \u00b6 The osg-configure script is used to process the INI files and apply changes to the system. osg-configure must be run as root. The typical workflow of OSG-Configure is to first edit the INI files, then verify them, then apply the changes. To verify the config files, run: [root@server] osg-configure -v OSG-Configure will list any errors in your configuration, usually including the section and option where the problem is. Potential problems are: Required option not filled in Invalid value Syntax error Inconsistencies between options To apply changes, run: [root@server] osg-configure -c If your INI files do not change, then re-running osg-configure -c will result in the same configuration as when you ran it the last time. This allows you to experiment with your settings without having to worry about messing up your system. OSG-Configure is split up into modules. Normally, all modules are run when calling osg-configure . However, it is possible to run specific modules separately. To see a list of modules, including whether they can be run separately, run: [root@server] osg-configure -l If the module can be run separately, specify it with the -m option, where is one of the items of the output of the previous command. [root@server] osg-configure -c -m Options may be specified in multiple INI files, which may make it hard to determine which value OSG-Configure uses. You may query the final value of an option via one of these methods: [root@server] osg-configure -q -o