Documentation (#146)
* Expand Troubleshooting section in doc, and put workarounds for DIRAC for Mac users in the same place.

* Add a quick documentation on DIRAC

* Expand doc on DIRAC

---------

Co-authored-by: Jean-Philippe Lenain <[email protected]>
jlenain authored Aug 2, 2024
1 parent 48804b4 commit 939c858
Showing 4 changed files with 151 additions and 17 deletions.
15 changes: 2 additions & 13 deletions docs/developer-guide/getting-started.rst
@@ -55,19 +55,8 @@ To enable support for DIRAC within the same environment, do the following after
$ dirac-configure

Some Mac OS users (running on M1 or M2 CPU chips) may experience issues with DIRAC;
please refer to :ref:`note_mac_users`.


Building the documentation
105 changes: 105 additions & 0 deletions docs/user-guide/howto-dirac.rst
@@ -0,0 +1,105 @@
.. _dirac:

How to use DIRAC
----------------

NectarCAM data are stored on the `EGI <https://www.egi.eu/>`_ grid using `CTA-DIRAC <https://redmine.cta-observatory.org/projects/cta_dirac/wiki/CTA-DIRAC_Users_Guide>`_.

Starting with DIRAC
===================

To start interacting with DIRAC, one needs to initialize a proxy, with the ``cta_nectarcam`` role enabled:

.. code-block:: console

    $ dirac-proxy-init -M -g cta_nectarcam

DIRAC commands are quite long and can be tedious to learn and handle. The two main
families of useful DIRAC commands are:

* ``dirac-dms-<XXX>`` which interact with the Data Management System (i.e. data storage);
* ``dirac-wms-<XXX>`` which interact with the Workload Management System (i.e. to submit and interact with jobs on the grid).

Many more details can be found in the `CTA-DIRAC users guide <https://redmine.cta-observatory.org/projects/cta_dirac/wiki/CTA-DIRAC_Users_Guide>`_.
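As an illustration (a sketch only: ``dirac-dms-get-file`` is the command aliased by COMDIRAC's ``dget`` below, and ``dirac-wms-job-status``, assumed here from the standard DIRAC client, takes the job identifier returned at submission time), one command from each family could look like:

.. code-block:: console

    $ dirac-dms-get-file /vo.cta.in2p3.fr/nectarcam/2024/20240722/NectarCAM.Run5568.0000.fits.fz
    $ dirac-wms-job-status <JobID>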

How to explore NectarCAM data on DIRAC
======================================

Several possibilities exist to explore NectarCAM data on the grid:

* Using the ``dirac-dms-filecatalog-cli`` command:

  .. code-block:: console

      $ dirac-dms-filecatalog-cli
      Starting FileCatalog container console.
      Note that you will access several catalogs at the same time:
            DIRACFileCatalog - Master
            TSCatalog - Write
      If you want to work with a single catalog, specify it with the -f option
      FC:/> ls
      prod4_sst
      vo.cta.in2p3.fr
      FC:/> cd /vo.cta.in2p3.fr/nectarcam/2024/20240722
      FC:/vo.cta.in2p3.fr/nectarcam/2024/20240722> ls
      NectarCAM.Run5568.0000.fits.fz
      NectarCAM.Run5568.0001.fits.fz
      NectarCAM.Run5568.0002.fits.fz
      NectarCAM.Run5568.0003.fits.fz
      NectarCAM.Run5568.0004.fits.fz

* Using the `COMDIRAC <https://github.com/DIRACGrid/COMDIRAC/wiki>`_ convenience commands, which provide simpler aliases to DIRAC commands, such as:

  * ``dls``, equivalent to ``ls`` on Linux;

  * ``dget``, an alias for ``dirac-dms-get-file``, to download data from DIRAC;

  * ``dsub``, an alias for ``dirac-wms-job-submit``, to submit jobs to DIRAC;

  * ``dstat``, to list your active jobs on DIRAC.

To use these commands, one should start a COMDIRAC session with:

.. code-block:: console

    $ dinit -p

NectarCAM data can then be explored using ``dls``:

.. code-block:: console

    $ dls /vo.cta.in2p3.fr/nectarcam/2024/20240722
    /vo.cta.in2p3.fr/nectarcam/2024/20240722:
    NectarCAM.Run5568.0000.fits.fz
    NectarCAM.Run5568.0001.fits.fz
    NectarCAM.Run5568.0002.fits.fz
    NectarCAM.Run5568.0003.fits.fz
    NectarCAM.Run5568.0004.fits.fz

The `~nectarchain.data.management.DataManagement.findrun` method will
automatically locate NectarCAM data on DIRAC, given a run number, and fetch
the run files for you.
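As the listing above shows, NectarCAM data on DIRAC follow a date-based directory
layout. A minimal shell sketch (an illustration only, assuming the
``/vo.cta.in2p3.fr/nectarcam/<year>/<date>`` layout shown above) of composing such a path:

.. code-block:: bash

    # Compose the DIRAC directory for NectarCAM data taken on a given date,
    # following the /vo.cta.in2p3.fr/nectarcam/<year>/<date> layout.
    run_date="20240722"
    year=$(echo "${run_date}" | cut -c1-4)   # first four characters of the date
    lfn_dir="/vo.cta.in2p3.fr/nectarcam/${year}/${run_date}"
    echo "${lfn_dir}"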

Tips
====

Proxy error
^^^^^^^^^^^

If, when initializing your DIRAC proxy from your laptop, you encounter an error such as:

.. code-block:: console

    $ dirac-proxy-init -M -g cta_nectarcam
    Your proxy is valid until Sat Aug 3 11:31:07 2024
    ; StdErr: ..........................................................
    [...]
    Certificate verification failed.
    outdated CRL found, revoking all certs till you get new CRL
    Function: certificate validation error: CRL has expired

this can be due to outdated certificates for DIRAC services stored on your computer.
One can re-synchronise them using the following command:

.. code-block:: console

    $ dirac-admin-get-CAs
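Should the error persist after re-synchronising the certificates, inspecting the
current proxy can help diagnose the problem (``dirac-proxy-info`` is part of the
standard DIRAC client):

.. code-block:: console

    $ dirac-proxy-info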
1 change: 1 addition & 0 deletions docs/user-guide/index.rst
@@ -7,4 +7,5 @@ User Guide
:maxdepth: 2

getting-started
howto-dirac
troubleshooting
47 changes: 43 additions & 4 deletions docs/user-guide/troubleshooting.rst
@@ -1,9 +1,43 @@
.. _troubleshooting:

Troubleshooting
===============

.. _note_mac_users:

Note to macOS users
-------------------

macOS users may experience errors when trying to initialize a proxy to DIRAC when
:ref:`DIRAC support <optional-dirac-support>` is enabled, especially with recent
hardware equipped with M1 or M2 Apple CPU chips. Two possible workarounds are proposed
below.

Downgrading ``voms``
^^^^^^^^^^^^^^^^^^^^

Some Mac OS users (running on M1 or M2 chips) may experience a ``M2Crypto.SSL.SSLError``
error when trying to initiate a DIRAC proxy with ``dirac-proxy-init``. During the
:ref:`installation process <optional-dirac-support>`, instead of:

.. code-block:: console

    $ mamba install dirac-grid

one may try:

.. code-block:: console

    $ mamba install dirac-grid "voms=2.1.0rc2=h7a71a8a_7"

Using a container
^^^^^^^^^^^^^^^^^

A container can alternatively provide an environment with CTADIRAC fully configured.
However, `Apptainer <https://apptainer.org/>`_ is `not readily available on macOS <https://apptainer.org/docs/admin/main/installation.html#mac>`_,
but there is a workaround using `lima virtualization technology <https://lima-vm.io/>`_
on a Mac.

**TL;DR**

@@ -14,12 +14,17 @@ macOS users may experience errors when trying to initialize a proxy to DIRAC whe
$ limactl shell apptainer apptainer run --bind $HOME:/home/$USER.linux oras://ghcr.io/cta-observatory/nectarchain:latest

If you are running a Mac whose CPU is based on the ARM architecture (M1 or M2 Apple chips),
when starting the ``apptainer`` container (second line above), please select the
``Open an editor to review or modify the current configuration`` option and add the
following line at the beginning of the configuration file:

.. code-block:: console

    arch: "x86_64"

otherwise, if your Mac is on an Intel CPU chip, please proceed with the
``Proceed with the current configuration`` option.

The mount point ``/tmp/lima`` is shared between the host machine and the ``apptainer``
container, and writable from both.
