Alerts are sent from Hopsworks using Prometheus' Alert manager. In order to send alerts we first need to configure the Alert manager. To do that, click on your name in the top right corner of the navigation bar and choose Cluster Settings from the dropdown menu. In the Cluster Settings' Alerts tab you can configure the Alert manager to send alerts via email, Slack or PagerDuty.
To send alerts via email you need to configure an SMTP server. Click on the Configure button on the left side of the email row and fill out the form that pops up.
Optionally, cluster-wide email alert receivers can be added in Default receiver emails. These receivers will be available to all users when they create event-triggered alerts.
Alerts can also be sent via Slack messages. To be able to send Slack messages you first need to configure a Slack webhook. Click on the Configure button on the left side of the Slack row and paste your Slack webhook in Webhook.
Optionally, cluster-wide Slack alert receivers can be added in Slack channel/user. These receivers will be available to all users when they create event-triggered alerts.
PagerDuty is another way you can send alerts from Hopsworks. Click on the Configure button on the left side of the PagerDuty row and fill out the form that pops up.
Fill in PagerDuty URL: the URL to send API requests to.
Optionally, cluster-wide PagerDuty alert receivers can be added in Service key/Routing key. First choose the PagerDuty integration type:

- Events API v2
- Prometheus

Then add the Service key/Routing key of the receiver(s). PagerDuty provides documentation on how to integrate with Prometheus' Alert manager.
If you are familiar with Prometheus' Alert manager you can also configure alerts by editing the YAML/JSON configuration file directly.
Example: adding the YAML snippet shown below to the global section of the Alert manager configuration has the same effect as creating the SMTP configuration shown in section 1 above.
```yaml
global:
  smtp_smarthost: smtp.gmail.com:587
  smtp_from: hopsworks@gmail.com
  smtp_auth_username: hopsworks@gmail.com
  smtp_auth_password: XXXXXXXXX
  smtp_auth_identity: hopsworks@gmail.com
  ...
```
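A Slack receiver can be configured in the same file. The snippet below is a minimal sketch of what such a configuration could look like; the webhook URL and channel name are placeholders to replace with the values from your own Slack workspace:

```yaml
global:
  # Placeholder webhook; use the one generated for your Slack workspace.
  slack_api_url: https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXX
receivers:
  - name: slack-alerts
    slack_configs:
      - channel: '#hopsworks-alerts'  # placeholder channel name
        send_resolved: true
```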
To test the alerts by creating triggers from Jobs and Feature group validations, see Alerts.
To configure authentication methods, click on your name in the top right corner of the navigation bar and choose Cluster Settings from the dropdown menu. In the Cluster Settings' Authentication tab you can configure how users authenticate.
TOTP Two-factor Authentication: can be disabled, optional or mandatory. If set to mandatory, all users are required to set up two-factor authentication when registering.
Note

If two-factor authentication is set to mandatory on a cluster with preexisting users, all users will need to go through the lost-device recovery step to enable it. Consider setting it to optional first and allowing users to enable it before making it mandatory.
OAuth2: if your organization already has an identity management system compatible with OpenID Connect (OIDC), you can configure Hopsworks to use your identity provider by enabling OAuth as shown in the figure below. After enabling OAuth you can register your identity provider by clicking on the Add Identity Provider button. See Create client for details.
In the figure above we see a cluster with two-factor authentication disabled, OAuth enabled with one registered identity provider, and LDAP authentication enabled.
The state of the Hopsworks cluster is divided into data and metadata and distributed across the different node groups. This section of the guide shows how to take a consistent backup of the data in the offline and online feature store as well as the metadata.
The following services contain critical state that should be backed up: RonDB, which stores the cluster metadata and the online feature store data, and HopsFS, which stores the offline feature store data.
Backing up service/application metrics and service/application logs is out of the scope of this guide. By default, metrics and logs are rotated after 7 days. Application logs are available on HopsFS when the application has finished and, as such, are backed up with the rest of HopsFS' data.
Apache Kafka and OpenSearch are additional services maintaining state. The OpenSearch metadata can be reconstructed from the metadata stored on RonDB.
Apache Kafka is used in Hopsworks to store the in-flight data that is on its way to the online feature store. In the event of a total loss of the cluster, running jobs with in-flight data will have to be replayed.
Hopsworks adopts an infrastructure-as-code philosophy; all the configuration files for the different Hopsworks services are generated during the deployment phase. Cluster-specific customizations should be centralized in the cluster definition used to deploy the cluster. The cluster definition should therefore be backed up (e.g., by committing it to a git repository) so that the same cluster can be recreated if needed.
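As a minimal sketch of this practice, the cluster definition can be kept under version control; the directory and file names below are hypothetical:

```bash
# Track the cluster definition in git so the exact deployment can be reproduced.
cd /path/to/cluster-definitions   # hypothetical directory holding the definition
git init
git add cluster-definition.yml    # hypothetical file name
git commit -m "Backup of the Hopsworks cluster definition"
```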
The RonDB backup is divided into two parts: users and privileges backup, and data backup.
To take the backup of users and privileges you can run the following command from any of the nodes in the head node group. This command generates a SQL file containing all the user definitions for the metadata services (Hopsworks, HopsFS, Metastore) as well as the users and permission grants for the online feature store. The command needs to be run as the 'mysql' user or with sudo privileges.
```bash
/srv/hops/mysql/bin/mysqlpump -S /srv/hops/mysql-cluster/mysql.sock --exclude-databases=% --exclude-users=root,mysql.sys,mysql.session,mysql.infoschema --users > users.sql
```
The second step is to trigger the backup of the data. This can be achieved by running the following command as the 'mysql' user on one of the nodes of the head node group.
```bash
/srv/hops/mysql-cluster/ndb/scripts/mgm-client.sh -e "START BACKUP [replace_backup_id] SNAPSHOTEND WAIT COMPLETED"
```
The backup ID is an integer greater than or equal to 1. The script uses $(date +'%y%m%d%H%M') instead of a plain integer as the backup ID to make it easier to identify backups over time.
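Combining the two, a timestamped backup could be triggered as in the following sketch:

```bash
# Use a timestamp (e.g., 2405171530 for 2024-05-17 15:30) as the backup ID.
BACKUP_ID=$(date +'%y%m%d%H%M')
/srv/hops/mysql-cluster/ndb/scripts/mgm-client.sh -e "START BACKUP ${BACKUP_ID} SNAPSHOTEND WAIT COMPLETED"
```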
The command instructs each RonDB datanode to back up the data it is responsible for. The backup will be located locally on each datanode under the following path:
```bash
/srv/hops/mysql-cluster/ndb/backups/BACKUP   # the directory name will be BACKUP-[backup_id]
```
A more comprehensive backup script is available here. The script includes the steps above as well as collecting all the partial RonDB backups on a single node. It is a good starting point and can be adapted to ship the database backup outside the cluster.
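The collection step could look roughly like the sketch below, assuming passwordless SSH between the nodes; the backup ID, host names and destination directory are hypothetical:

```bash
# Gather the partial backups from every RonDB datanode onto this node.
BACKUP_ID=2405171530                # hypothetical backup ID
DATANODES="rondb-dn-1 rondb-dn-2"   # hypothetical datanode host names
DEST=/srv/hops/backups/rondb
mkdir -p "${DEST}"
for node in ${DATANODES}; do
  scp -r "${node}:/srv/hops/mysql-cluster/ndb/backups/BACKUP/BACKUP-${BACKUP_ID}" \
      "${DEST}/${node}-BACKUP-${BACKUP_ID}"
done
```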
HopsFS is a distributed file system based on Apache HDFS. HopsFS stores its metadata in RonDB, so its metadata backup has already been covered in the section above. The data is stored in the form of blocks on the different datanodes. For availability reasons, the blocks are replicated across three different datanodes.
Within a node, the blocks are stored by default under the following directory, under the ownership of the 'hdfs' user:
```bash
/srv/hopsworks-data/hops/hopsdata/hdfs/dn/
```
To safely back up all the data, a copy should be taken from all the datanodes. As the data is replicated across the different nodes, excluding a set of nodes might result in data loss.
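One way to take such a copy, sketched below for a single datanode, is to archive the block directory; the destination path is a placeholder:

```bash
# Run on each datanode: archive the block directory to a backup location.
mkdir -p /srv/hops/backups  # hypothetical backup destination
tar czf /srv/hops/backups/hopsfs-dn-$(hostname)-$(date +'%y%m%d%H%M').tar.gz \
    -C /srv/hopsworks-data/hops/hopsdata/hdfs dn
```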
Additionally, as HopsFS blocks are files on the file system and the file system can be quite large, the backup is not transactional. Consistency is dictated by the metadata: blocks added during the copying process will not be visible when restoring, as they are not part of the metadata backup taken prior to cloning the HopsFS blocks.
When the HopsFS data blocks are stored in cloud object storage, for example Amazon S3, it is sufficient to back up only the metadata. The cloud storage service will ensure the durability of the data blocks.
As with the backup phase, the restore operation is broken down into different steps.
The first step to redeploy the cluster is to redeploy the binaries and configuration. You should reuse the same cluster definition used to deploy the first (original) cluster. This will recreate the same cluster with the same configuration.
The deployment step above creates a functioning empty cluster. To restore the cluster, the first step is to restore the metadata and the online feature store data stored on RonDB. To restore the state of RonDB, we first need to restore its schemas and tables, then its data, rebuild the indices, and finally restore the users and grants.
This command should be executed on one of the nodes in the head node group and will recreate the schemas, tables, and internal RonDB metadata. In the command below, replace node_id with the ID of the node you are running the command on, and backup_id with the ID of the backup you want to restore. Finally, replace mgm_node_ip with the address of the node where the RonDB management service is running.
```bash
/srv/hops/mysql/bin/ndb_restore -n [node_id] -b [backup_id] -m --disable-indexes --ndb-connectstring=[mgm_node_ip]:1186 --backup_path=/srv/hops/mysql-cluster/ndb/backups/BACKUP/BACKUP-[backup_id]
```
This command should be executed on all the RonDB datanodes. Each command should be customized with the node ID of the node you are trying to restore (i.e., replace node_id). As for the command above, you should replace backup_id and mgm_node_ip.
```bash
/srv/hops/mysql/bin/ndb_restore -n [node_id] -b [backup_id] -r --ndb-connectstring=[mgm_node_ip]:1186 --backup_path=/srv/hops/mysql-cluster/ndb/backups/BACKUP/BACKUP-[backup_id]
```
In the first command we disabled the indices for recovery; this last command takes care of enabling them again. It needs to run only once, on one of the nodes of the head node group. As for the commands above, you should replace node_id, backup_id and mgm_node_ip.
```bash
/srv/hops/mysql/bin/ndb_restore -n [node_id] -b [backup_id] --rebuild-indexes --ndb-connectstring=[mgm_node_ip]:1186 --backup_path=/srv/hops/mysql-cluster/ndb/backups/BACKUP/BACKUP-[backup_id]
```
In the backup phase, we took the backup of the users and grants separately. The last step of the RonDB restore process is to recreate all the users and grants, both for the Hopsworks services and for the online feature store users. This can be achieved by running the following command on one node of the head node group:
```bash
/srv/hops/mysql-cluster/ndb/scripts/mysql-client.sh source users.sql
```
With the metadata restored, you can now proceed to restore the file system blocks on HopsFS and restart the file system. When starting, each datanode will advertise its ID, cluster ID and storage ID based on the VERSION file that can be found in this directory:
```bash
/srv/hopsworks-data/hops/hopsdata/hdfs/dn/current
```
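As an illustration, a datanode VERSION file contains fields along the following lines (the values shown are hypothetical):

```bash
cat /srv/hopsworks-data/hops/hopsdata/hdfs/dn/current/VERSION
# storageID=DS-1073741825-10.0.0.5-50010-1714383967000
# clusterID=CID-8c9a1f2e-1111-2222-3333-444444444444
# cTime=0
# datanodeUuid=2f5c3a1b-5555-6666-7777-888888888888
# storageType=DATA_NODE
# layoutVersion=-57
```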
It is important that all the datanodes are restored and that they report their blocks to the namenode processes running on the head nodes. By default, the namenodes in HopsFS will exit "SAFE MODE" (i.e., the mode that allows only read operations) only when the datanodes have reported 99.9% of the blocks the namenodes have in the metadata. As such, the namenodes will not resume operations until all the file blocks have been restored.
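To check whether the namenodes have left safe mode you can use the standard HDFS admin command; the client path below is an assumption about where the Hadoop binaries live in your installation:

```bash
# Prints e.g. "Safe mode is ON" while blocks are still being reported.
/srv/hops/hadoop/bin/hdfs dfsadmin -safemode get
```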
The OpenSearch state can be rebuilt using the Hopsworks metadata stored on RonDB. The rebuild is done using the re-indexing mechanism provided by ePipe. The re-indexing can be triggered by running the following command on the head node where ePipe is running:
```bash
/srv/hops/epipe/bin/reindex-epipe.sh
```
The script is deployed and configured during the platform deployment.
The backup and restore plan doesn't cover the data in transit in Kafka; the jobs producing it will have to be replayed. However, the RonDB backup contains the information necessary to recreate the topics of all the feature groups. You can run the following command, as the super user, to recreate all the topics with the correct partitioning and replication factors:
```bash
/srv/hops/kafka/bin/kafka-restore.sh
```
The script is deployed and configured during the platform deployment.
At a high level, a Hopsworks cluster can be divided into four groups of nodes. Each node group should be deployed according to the requirements (e.g., 3/5/7 nodes for the head node group) to guarantee the availability of the components.
Example deployment:
For higher availability, a Hopsworks cluster should be deployed across multiple availability zones; however, a single cluster cannot be deployed across multiple regions. Multi-region deployments are out of the scope of this guide.
A different service placement is also possible, e.g., separating the RonDB datanodes between metadata and the online feature store, or adding more replicas of a metadata service without adding a whole new head node; however, this is outside the scope of this guide.
The Hopsworks Feature Store is the underlying component powering enterprise ML pipelines as well as serving feature data to models making user-facing predictions. A Hopsworks cluster can experience hardware failures or power loss; to help you plan for these occasions and avoid Hopsworks Feature Store downtime, we put together this guide. The guide is divided into three sections: backup, restore, and high availability.
Using an EC2 instance profile enables your Hopsworks cluster to access AWS resources, but it forces all Hopsworks users to share the instance profile role and the resource access policies attached to that role. To allow for per-project access policies, you could have your users use AWS credentials directly in their programs, which is not recommended; instead, you should use role chaining. To use role chaining, you need to first set up IAM roles in AWS:
Step 1. Create an instance profile role with a policy that allows it to assume all the resource roles that should be assumable from the Hopsworks cluster.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AssumeDataRoles",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::123456789011:role/test-role",
        "arn:aws:iam::xxxxxxxxxxxx:role/s3-role",
        "arn:aws:iam::xxxxxxxxxxxx:role/dev-s3-role",
        "arn:aws:iam::xxxxxxxxxxxx:role/redshift"
      ]
    }
  ]
}
```
Step 2. Create the resource roles and edit each role's trust relationship, adding a policy document that allows the instance profile to assume the role.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxxxxxxx:role/instance-profile"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
Role chaining allows the instance profile to assume any role in the policy attached in step 1. To limit access to IAM roles, we can create a per-project mapping from the admin page in Hopsworks.
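For illustration, assuming one of the resource roles from a node carrying the instance profile could look like the sketch below; the role ARN and session name are placeholders:

```bash
# Exchange the instance profile credentials for temporary credentials of a resource role.
aws sts assume-role \
  --role-arn arn:aws:iam::xxxxxxxxxxxx:role/s3-role \
  --role-session-name my-project-session
```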
Click on your name in the top right corner of the navigation bar and choose Cluster Settings from the dropdown menu. In the Cluster Settings' IAM Role Chaining tab you can configure the mappings between projects and IAM roles. You can add a mapping by entering the project name, which roles in that project can access the cloud role, and the role ARN. Optionally, you can set a role mapping as default by marking the default checkbox. The default roles can be changed from the project settings by a Data Owner in that project.
Any member of a project can then go to the Project Settings -> Assuming IAM Roles page to see which roles they can assume.
Hopsworks has a cluster management page that allows you, the administrator, to perform management actions and to monitor and control Hopsworks.
To access the cluster management page, log in to Hopsworks using your administrator account, then click on your name in the top right corner of the navigation bar and choose Cluster Settings from the dropdown menu.
Kerberos needs some server configuration before you can enable it from the UI. For instructions on how to configure your Hopsworks server, see Server Configuration for Kerberos.
After configuring the server you can configure authentication methods by clicking on your name in the top right corner of the navigation bar and choosing Cluster Settings from the dropdown menu. In the Authentication tab of Cluster Settings you can enable Kerberos by clicking on the Kerberos checkbox.
If the LDAP/Kerberos checkbox is not checked, make sure that you have configured your application server, then enable it by clicking on the checkbox.
Finally, click on edit configuration and fill in the attributes.
- Group mapping, e.g. Directory Administrators->HOPS_ADMIN;IT People->HOPS_USER. Default is empty. If no mapping is specified, users need to be assigned a role by an admin before they can log in.
- Login filter. Default uid=%s.
- First name attribute. Default givenName.
- Last name attribute. Default sn.
- Email attribute. Default mail.
- User search filter. Default uid=%s.
- Kerberos principal search filter. Default krbPrincipalName=%s.
- Group member filter. Default member=%d.
- Group name attribute. Default cn.
- Group membership attribute. Default memberOf.

All defaults are taken from OpenLDAP.
The login page will now offer the choice to use Kerberos for authentication.
Note
Make sure that you have Kerberos properly configured on your computer and that you are logged in. Kerberos support must also be configured in the browser to use Kerberos for authentication.
LDAP needs some server configuration before you can enable it from the UI. For instructions on how to configure your Hopsworks server, see Server Configuration for LDAP.
After configuring the server you can configure authentication methods by clicking on your name in the top right corner of the navigation bar and choosing Cluster Settings from the dropdown menu. In the Authentication tab of Cluster Settings you can enable LDAP by clicking on the LDAP checkbox.
If the LDAP/Kerberos checkbox is not checked, make sure that you have configured your application server, then enable it by clicking on the checkbox.
Finally, click on edit configuration and fill in the attributes.
- Group mapping, e.g. Directory Administrators->HOPS_ADMIN;IT People->HOPS_USER. Default is empty. If no mapping is specified, users need to be assigned a role by an admin before they can log in.
- Login filter. Default uid=%s.
- First name attribute. Default givenName.
- Last name attribute. Default sn.
- Email attribute. Default mail.
- User search filter. Default uid=%s.
- Group member filter. Default member=%d.
- Group name attribute. Default cn.
- Group membership attribute. Default memberOf.

All defaults are taken from OpenLDAP.
The login page will now offer the choice to use LDAP for authentication.
LDAP and Kerberos integration need some configuration in the Karamel cluster definition used to deploy your Hopsworks cluster.
The LDAP attributes below are used to configure a JNDI external resource in Payara. The JNDI resource will communicate with your LDAP server to perform the authentication.
```yaml
ldap:
  enabled: true
  jndilookupname: "dc=hopsworks,dc=ai"
  provider_url: "ldap://193.10.66.104:1389"
  attr_binary_val: "entryUUID"
  security_auth: "none"
  security_principal: ""
  security_credentials: ""
  referral: "ignore"
  additional_props: ""
```
The Kerberos attributes are used to configure SPNEGO. SPNEGO is used to establish a secure context between the requester and the application server when using Kerberos authentication.
```yaml
kerberos:
  enabled: true
  krb_conf_path: "/etc/krb5.conf"
  krb_server_key_tab_path: "/etc/security/keytabs/service.keytab"
  krb_server_key_tab_name: "service.keytab"
  spnego_server_conf: '\nuseKeyTab=true\nprincipal=\"HTTP/server.hopsworks.ai@HOPSWORKS.AI\"\nstoreKey=true\nisInitiator=false'
ldap:
  jndilookupname: "dc=hopsworks,dc=ai"
  provider_url: "ldap://193.10.66.104:1389"
  attr_binary_val: "objectGUID"
  security_auth: "none"
  security_principal: ""
  security_credentials: ""
  referral: "ignore"
  additional_props: ""
```
Both the Kerberos and the LDAP attributes need to be specified to configure Kerberos. The LDAP attributes are explained above.
This example uses Azure Active Directory as the identity provider, but the same can be done with any identity provider supporting OAuth2.
To use OAuth2 in Hopsworks you first need to create and configure an OAuth client in your identity provider. We will take Azure AD as the example for the remainder of this documentation, but equivalent steps can be taken with other identity providers.
Navigate to the Microsoft Azure Portal and authenticate. Navigate to Azure Active Directory, click on App Registrations, then click on New Registration.
++ +
+ +Enter a name for the client such as hopsworks_oauth_client. Verify the Supported account type is set to Accounts in this organizational directory only. And Click Register.
++ +
+ +In the Overview section, copy the Application (client) ID field. We will use it in +Identity Provider registration under the name Client id.
++ +
+ +Click on Endpoints and copy the OpenId Connect metadata document endpoint excluding the .well-known/openid-configuration part. +We will use it in Identity Provider registration under the name Connection URL.
++ +
+ +Click on Certificates & secrets, then Click on New client secret.
++ +
+ +Add a description of the secret. Select an expiration period. And, Click Add.
++ +
+ +Copy the secret. This will be used in Identity Provider registration under the name +Client Secret.
++ +
+ +Click on Authentication. Then click on Add a platform
++ +
+ +In Configure platforms click on Web.
++ +
+ +Enter the Redirect URI and click on Configure. The redirect URI is HOPSWORKS-URI/callback with HOPSWORKS-URI the URI of your hopsworks cluster.
++ +
+ +Note
+If your hopsworks cluster is created on the cloud (hopsworks.ai), +you can find your HOPSWORKS-URI by going to the hopsworks.ai dashboard +in the General tab of your cluster and copying the URI.
Before registering your identity provider in Hopsworks you need to create a client application in your identity provider and acquire a client id and a client secret. Examples of how to create a client using the Okta and Azure Active Directory identity providers can be found here and here, respectively.
After acquiring the client id and client secret, create the client in Hopsworks by enabling OAuth2 and clicking on add another identity provider in the Authentication configuration page. Then set the base URI of your identity provider in Connection URL, give a name to your identity provider (the name will be used in the login page as an alternative login method), and set the client id and client secret in their respective fields, as shown in the figure below.
Additional configuration can be set here:
Optionally, you can add a group mapping from your identity provider to Hopsworks groups by clicking on your name in the top right corner of the navigation bar and choosing Cluster Settings from the dropdown menu. In the Cluster Settings' Configuration tab, search for oauth_group_mapping and click on the edit button.
Note

Setting oauth_group_mapping to ANY_GROUP->HOPS_USER will assign the role user to any user from any group in your identity provider when they log into Hopsworks with OAuth for the first time. You can replace ANY_GROUP with a group of your choice in the identity provider, and you can replace HOPS_USER with HOPS_ADMIN if you want the users of that group to be admins in Hopsworks. You can specify several mappings by separating them with a semicolon.
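For instance, a mapping value combining two groups could look like the sketch below; the group names on the left are hypothetical groups in your identity provider:

```
data-scientists->HOPS_USER;platform-admins->HOPS_ADMIN
```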
Users will now see a new button on the login page. The button has the name you set above for Name and will redirect to your identity provider.
Note

When creating a client, make sure you can access the provider metadata by making a GET request to the well-known endpoint of the provider. The well-known URL will typically be the Connection URL plus .well-known/openid-configuration. For the above client it would be https://dev-86723251.okta.com/.well-known/openid-configuration.
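For example, you can verify that the metadata is reachable with a plain GET request against the URL mentioned above:

```bash
# A JSON body listing the issuer and endpoints indicates the provider is reachable.
curl https://dev-86723251.okta.com/.well-known/openid-configuration
```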
This example uses an Okta development account to create an application that will represent a Hopsworks client in the identity provider. To create a developer account, go to Okta developer.
After creating a developer account, register a client by going to Applications and clicking on Create App Integration.
This will open a popup as shown in the figure below. Select OIDC as the Sign-in method and Web Application as the Application type, then click Next.
Give your application a name and select Client credential as the Grant Type. Then add a Sign-in redirect URI that is your Hopsworks cluster domain name (including the port number if needed) with the path /callback, and a Sign-out redirect URI that is the Hopsworks cluster domain name (including the port number if needed) with no path.
If you want to limit who can access your Hopsworks cluster, select Limit access to selected groups and select the group(s) you want to give access to. Here we will allow everyone in the organization to access the cluster.
You can also create mappings from groups in Okta to groups in Hopsworks. To achieve this you need to configure Okta to send groups with the user information. To do this, go to Applications and select your application name. In the Sign On tab, click edit on OpenID Connect ID Token and select Filter for the Groups claim type; then, for the Groups claim filter, add groups as the claim name, select Match Regex from the dropdown and .* (dot star) as the regex to match all groups. See Group mapping on how to do the mapping in Hopsworks.
After the application is created, go back to Applications and click on the application you just created. Use the Okta domain (Connection URL), client id and client secret generated for your app in the Identity Provider registration in Hopsworks.
Note
When copying the domain in the figure above, make sure to add the URL scheme (http:// or https://) when using it in the Connection URL in the Identity Provider registration form.
Hopsworks provides administrators with a view of the status and health of the cluster. This information is provided through the Services page. You can find the Services page by clicking on your name in the top right corner of the navigation bar, choosing Cluster Settings from the dropdown menu, and going to the Services tab.
This page gives administrators an overview of which services are running on the cluster. It provides information about their status as reported by agents that monitor the status of the different systemd units.
Columns in the services table represent machines in your cluster. Each service running on a machine will have the status running (green) or stopped (red). If a service is not installed on a machine it will have the status not installed (gray). Services are divided into groups, and you can search for a service by its name or group. You can also search for machines by their host name.
After you find the correct service, you will be able to start, stop or restart it by clicking on its status.
Note
Stopping some services, like the web server (glassfish_domain1), is not recommended. If you stop it, you will have to access the machine running the service and start it with systemctl start glassfish_domain1.
Whether you run Hopsworks on-premises or in the cloud using hopsworks.ai, you have a Hopsworks cluster which contains all users and projects.
All the users of your Hopsworks instance have access to your cluster with different access rights. You can find them by clicking on your name in the top right corner of the navigation bar, choosing Cluster Settings from the dropdown menu, and going to the Users tab (you need the Admin role to access the Cluster Settings page).
Roles let you manage the access rights of a user to the cluster.
By default, a user who registers on Hopsworks using their own credentials is not granted access to the cluster. First, a user with an admin role needs to validate their account.
By clicking on the Review Requests button you can open a user request review popup, as shown in the image below.
On the user request review popup you can activate or block users. Users with a validated email address will have a check mark on their email.
Similarly, if a user is no longer allowed access to the cluster, you can block them. To keep the history of your datasets consistent, a user cannot be deleted, only blocked. If necessary, a user can be deleted manually in the cluster using the command line. You can block a user by clicking on the block icon on the right side of the user in the list.
Blocked users will appear in the lower section of the page. Click on display blocked users to show all the blocked users in your cluster. If a user is blocked by mistake, you can reactivate them by clicking on the check mark icon that corresponds to that user in the blocked users list.
You can also change the role of a user by clicking on the select dropdown that shows the current role of the user.
If there are too many users in your cluster, use the search box (available for blocked users too) to filter users by name or email. It is also possible to filter activated users by role. For example, to see all administrators in your cluster, click on the select dropdown to the right of the search box and choose Admin.
If you want to allow users to log in without registering, you can pre-create them by clicking on New user.
After setting the user's name and email, choose the type of user you want to create (Hopsworks, Kerberos or LDAP). To create a Kerberos or LDAP user you need to get the user's UUID from the Kerberos or LDAP server. Hopsworks users can also be assigned a Role; Kerberos and LDAP users, on the other hand, can only be assigned a role through group mapping.
A temporary password will be generated and displayed when you click on Create new user. Copy the password and pass it securely to the user.
In the case where a user loses their password and cannot recover it with the password recovery, an administrator can reset it for them.
At the bottom of the Users page, click on the Reset a user password link. A popup window with a dropdown for searching users by name or email will open. Find the user and click on Reset new password.
A temporary password will be displayed. Copy the password and pass it to the user securely.
A user with a temporary password will see a warning message when going to the User settings' Authentication tab.
Note
A temporary password should be changed as soon as possible.