diff --git a/docs/modules/hdfs/pages/index.adoc b/docs/modules/hdfs/pages/index.adoc
index d82fc311..0fd3bad9 100644
--- a/docs/modules/hdfs/pages/index.adoc
+++ b/docs/modules/hdfs/pages/index.adoc
@@ -6,7 +6,7 @@ NOTE: This operator only works with images from the https://repo.stackable.tech/
 
 == Roles
 
-Three xref:home:concepts:roles-and-role-groups.adoc[roles] of the HDFS cluster are implemented:
+Three xref:concepts:roles-and-role-groups.adoc[roles] of the HDFS cluster are implemented:
 
 * DataNode - responsible for storing the actual data.
 * JournalNode - responsible for keeping track of HDFS blocks and used to perform failovers in case the active NameNode
diff --git a/docs/modules/hdfs/pages/usage-guide/logging-log-aggregation.adoc b/docs/modules/hdfs/pages/usage-guide/logging-log-aggregation.adoc
index 1fe0f2af..17cff455 100644
--- a/docs/modules/hdfs/pages/usage-guide/logging-log-aggregation.adoc
+++ b/docs/modules/hdfs/pages/usage-guide/logging-log-aggregation.adoc
@@ -23,4 +23,4 @@ spec:
 ----
 
 Further information on how to configure logging can be found in
-xref:home:concepts:logging.adoc[].
+xref:concepts:logging.adoc[].
diff --git a/docs/modules/hdfs/pages/usage-guide/monitoring.adoc b/docs/modules/hdfs/pages/usage-guide/monitoring.adoc
index 1ae7cca2..8f4e6561 100644
--- a/docs/modules/hdfs/pages/usage-guide/monitoring.adoc
+++ b/docs/modules/hdfs/pages/usage-guide/monitoring.adoc
@@ -6,4 +6,4 @@ All services (with the exception of the ZooKeeper daemon on the name nodes) run
 
 The metrics endpoints are also used as liveness probes by Kubernetes.
 
-See xref:home:operators:monitoring.adoc[] for more details.
+See xref:operators:monitoring.adoc[] for more details.
diff --git a/docs/modules/hdfs/pages/usage-guide/resources.adoc b/docs/modules/hdfs/pages/usage-guide/resources.adoc
index 055f20a4..1e9e9597 100644
--- a/docs/modules/hdfs/pages/usage-guide/resources.adoc
+++ b/docs/modules/hdfs/pages/usage-guide/resources.adoc
@@ -67,7 +67,7 @@ The hdfs-operator will re-create the StatefulSet automatically.
 
 == Resource Requests
 
-include::home:concepts:stackable_resource_requests.adoc[]
+include::concepts:stackable_resource_requests.adoc[]
 
 A minimal HA setup consisting of 3 journalnodes, 2 namenodes and 2 datanodes has the following https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/[resource requirements]:
diff --git a/docs/modules/hdfs/pages/usage-guide/security.adoc b/docs/modules/hdfs/pages/usage-guide/security.adoc
index 8926c6e9..47670000 100644
--- a/docs/modules/hdfs/pages/usage-guide/security.adoc
+++ b/docs/modules/hdfs/pages/usage-guide/security.adoc
@@ -3,7 +3,7 @@
 == Authentication
 
 Currently the only supported authentication mechanism is Kerberos, which is disabled by default.
 For Kerberos to work, a Kerberos KDC is needed, which the user needs to provide.
-The xref:home:secret-operator:secretclass.adoc#backend-kerberoskeytab[secret-operator documentation] states which kinds of Kerberos servers are supported and how they can be configured.
+The xref:secret-operator:secretclass.adoc#backend-kerberoskeytab[secret-operator documentation] states which kinds of Kerberos servers are supported and how they can be configured.
 
 IMPORTANT: Kerberos is supported starting from HDFS version 3.3.x
@@ -12,7 +12,7 @@ To configure HDFS to use Kerberos you first need to collect information about yo
 Additionally you need a service-user, which the secret-operator uses to create principals for the HDFS services.
 
 === 2. Create Kerberos SecretClass
 
-Afterwards you need to enter all the needed information into a SecretClass, as described in the xref:home:secret-operator:secretclass.adoc#backend-kerberoskeytab[secret-operator documentation].
+Afterwards you need to enter all the needed information into a SecretClass, as described in the xref:secret-operator:secretclass.adoc#backend-kerberoskeytab[secret-operator documentation].
 The following guide assumes you have named your SecretClass `kerberos-hdfs`.
 
 === 3. Configure HDFS to use SecretClass
@@ -55,7 +55,7 @@ We have an https://github.com/stackabletech/hdfs-operator/blob/main/tests/templa
 == Authorization
 
 We don't support authorization yet.
-In the future, support will be added by writing an opa-authorizer to match our general xref:home:concepts:opa.adoc[] mechanisms.
+In the future, support will be added by writing an opa-authorizer to match our general xref:concepts:opa.adoc[] mechanisms.
 
 In the meantime, a very basic level of authorization can be achieved by using `configOverrides` to set the `hadoop.user.group.static.mapping.overrides` property.
 In the following example, the `dr.who=;nn=;nm=;jn=;` part is needed for HDFS internal operations and the user `testuser` is granted admin permissions.
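The `configOverrides` approach described at the end of the security page could look roughly like the following HdfsCluster excerpt. This is a sketch rather than text from the diff itself: the placement under the `nameNodes` role and the `core-site.xml` file name are assumptions about how the override is wired, and the `testuser=supergroup` mapping should be adapted to your own users.

```yaml
# Hypothetical HdfsCluster excerpt (not part of this diff): grants the user
# `testuser` membership in the `supergroup` admin group via a static
# user-to-group mapping. The `dr.who=;nn=;nm=;jn=;` entries are required for
# HDFS-internal operations, as the security page explains.
spec:
  nameNodes:            # assumption: override applied on the nameNodes role
    configOverrides:
      core-site.xml:    # assumption: the property belongs in core-site.xml
        hadoop.user.group.static.mapping.overrides: "dr.who=;nn=;nm=;jn=;testuser=supergroup;"
```

Note that such a static mapping only emulates group membership for permission checks; it is not a substitute for the OPA-based authorization the page says is planned.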