Even though the options and tools in the Admin UI should be fully documented, I wanted to mention argon2id tuning here anyway.
Rauthy uses the argon2id hashing algorithm for passwords. This is the most expensive and compute-heavy operation being done by the application, and the variables need to be tuned for every deployment to provide the best compromise of security, resource usage and user experience.
The default values are way too low for a real production deployment. They should only be used for testing.
The Admin UI provides a utility which helps you find the values for your deployment quickly. What to do and how is described in the Admin UI itself; I just want to point you to this utility explicitly, since it is an important step security-wise.
+When you are logged in to the Admin UI, please navigate to Config
-> Argon2 Parameters
to find your values.
+After they have been found, apply them to the rauthy config and restart the deployment.
Keep in mind that if you run the application in a way where memory is limited, for instance inside Kubernetes with resource limits set too low, it will crash if either ARGON2_M_COST is set too high or the memory limit too low.
There is one additional, really important config variable that needs to be taken into account for the tuning.
Since this operation is really resource intensive, you can limit the amount of threads which can run in parallel doing hashing operations. This is really important when we think about constrained memory again.
MAX_HASH_THREADS limits the maximum amount of parallel password hashes at the exact same time, so the system memory is never exceeded while still leaving a good amount of memory for the argon2id algorithm.
The default value is 2.
The rule is simple: Allow as many resources as possible for hashing to have the maximum amount of security, while +restricting it as much as necessary.
For smaller deployments, set MAX_HASH_THREADS=1, which technically allows only one user login at the exact same time. This value makes external rate limiting for the login obsolete (while you may still add some for the other endpoints).
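To give a rough idea of what the result can look like, the following is just a sketch using the values from the reference config further down, not a recommendation for your hardware - the real numbers must come from the Admin UI utility for your deployment:

ARGON2_M_COST=32768
ARGON2_T_COST=3
ARGON2_P_COST=2
MAX_HASH_THREADS=1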
If Rauthy is using SQLite, it does automatic backups, which can be configured with:
+# Cron job for automatic data store backups (default: "0 0 4 * * * *")
+# sec min hour day_of_month month day_of_week year
+BACKUP_TASK="0 0 4 * * * *"
+
+# The name for the data store backups. The current timestamp will always be appended automatically.
+# default: rauthy-backup-
+BACKUP_NAME="rauthy-backup-"
+
+# All backups older than the specified hours will be cleaned up automatically (default: 720)
+BACKUP_RETENTION_LOCAL=24
+
All these backups are written inside the pod / container into /app/data/backup.
The database itself will be saved in /app/data by default.
This difference makes it possible to add a second volume mount to the container.
You then have the database itself on a different disk than the backups, which is the most simple and straightforward approach to a basic backup strategy.
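A minimal sketch of such a setup with docker could look like the following - the host paths are just examples, and the second mount would typically point to a different physical disk:

docker run -d \
    -v $(pwd)/rauthy/data:/app/data \
    -v /mnt/backup-disk/rauthy:/app/data/backup \
    -p 8080:8080 \
    --name rauthy \
    sdobedev/rauthy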
The SQLite backups are done with VACUUM, which means you can just use the backups as a normal database again.
This makes it possible to use the Database Migration feature to apply backups very easily.
If you are using Postgres as the main database, Rauthy does not do any backups.
+There are a lot of way better tools out there to handle this task.
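For instance, a simple scheduled dump with the standard Postgres client tools could be a starting point - just a sketch, the connection values and retention handling are up to your environment:

pg_dump -h localhost -p 5432 -U rauthy -d rauthy -F c -f rauthy-$(date +%F).dump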
This shows a full example config with (hopefully) every value nicely described.
When you go into production, make sure that you provide the included secrets / sensitive information in this file in an appropriate way. With docker, you can leave them inside this file, but when deploying with Kubernetes, extract these values, create Kubernetes Secrets and provide them as environment variables.
+#####################################
+############## ACCESS ###############
+#####################################
+
+# If the User Registration endpoint should be accessible by anyone. If not, an admin must create each new user.
+# (default: false)
+#OPEN_USER_REG=true
+
+# Can be used when 'OPEN_USER_REG=true' to restrict the domains for a registration. For instance, set it to
+# 'USER_REG_DOMAIN_RESTRICTION=gmail.com' to allow only registrations with 'user@gmail.com'.
+# default: ''
+#USER_REG_DOMAIN_RESTRICTION=some-domain.com
+
+#####################################
+############# BACKUPS ###############
+#####################################
+
+# Cron job for automatic data store backups (default: "0 0 4 * * * *")
+# sec min hour day_of_month month day_of_week year
+#BACKUP_TASK="0 0 4 * * * *"
+
+# The name for the data store backups. The current timestamp will always be appended automatically. (default: rauthy-backup-)
+#BACKUP_NAME="rauthy-backup-"
+
+# All backups older than the specified hours will be cleaned up automatically (default: 720)
+#BACKUP_RETENTION_LOCAL=720
+
+#####################################
+############## CACHE ################
+#####################################
+
+# If the cache should start in HA mode or standalone
+# accepts 'true|false', defaults to 'false'
+#HA_MODE=false
+
+# The connection strings (with hostnames) of the HA instances as a CSV
+# Format: 'scheme://hostname:port'
+#HA_HOSTS="https://rauthy-0.rauthy:8080, https://rauthy-1.rauthy:8080 ,https://rauthy-2.rauthy:8080"
+
+# Overwrite the hostname which is used to identify each cache member.
+# Useful in scenarios, where for instance all members are on the same host with different ports or for testing.
+#HOSTNAME_OVERWRITE="rauthy-0.rauthy:8080"
+
+## Define buffer sizes for channels between the components
+# Buffer for client requests on the incoming stream - server side (default: 128)
# It makes sense to have the CACHE_BUF_SERVER set to: `(number of total HA cache hosts - 1) * CACHE_BUF_CLIENT`
+# In a non-HA deployment, set the same size for both
+#CACHE_BUF_SERVER=128
+# Buffer for client requests to remote servers for all cache operations (default: 128)
+#CACHE_BUF_CLIENT=128
+
+# Secret token, which is used to authenticate the cache members
+#CACHE_AUTH_TOKEN=SomeSuperSecretAndVerySafeToken1337
+
+## Connections Timeouts
+
+# The Server sends out keepalive pings with configured timeouts
+# The keepalive ping interval in seconds (default: 5)
+#CACHE_KEEPALIVE_INTERVAL=5
+# The keepalive ping timeout in seconds (default: 5)
+#CACHE_KEEPALIVE_TIMEOUT=5
+
+# The timeout for the leader election. If a newly saved leader request has not reached quorum after the timeout, the
+# leader will be reset and a new request will be sent out.
+# CAUTION: This should not be lower than CACHE_RECONNECT_TIMEOUT_UPPER, since cold starts and elections will be
+# problematic in that case.
+# value in seconds, default: 15
+#CACHE_ELECTION_TIMEOUT=15
+
+# These 2 values define the reconnect timeout for the HA Cache Clients.
+# The values are in ms and a random between these 2 will be chosen each time to avoid conflicts and race conditions
+# (default: 7500)
+#CACHE_RECONNECT_TIMEOUT_LOWER=7500
+# (default: 10000)
+#CACHE_RECONNECT_TIMEOUT_UPPER=10000
+
+#####################################
+############ DATABASE ###############
+#####################################
+
+# The database driver will be chosen at runtime depending on the given DATABASE_URL format. Examples:
+# Sqlite: 'sqlite:data/rauthy.db' or 'sqlite::memory:'
+# Postgres: 'postgresql://User:PasswordWithoutSpecialCharacters@localhost:5432/DatabaseName'
+#
+# NOTE: The password in this case should be alphanumeric. Special characters could cause problems in the connection
+# string.
+#
+# CAUTION: To make the automatic migrations work with Postgres15, when you do not want to just use the `postgres` user,
+# You need to have a user with the same name as the DB / schema. For instance, the following would work without
+# granting extra access to the `public` schema which is disabled by default since PG15:
+# database: rauthy
+# user: rauthy
+# schema: rauthy with owner rauthy
+#
+#DATABASE_URL=sqlite::memory:
+#DATABASE_URL=sqlite:data/rauthy.db
+#DATABASE_URL=postgresql://rauthy:123SuperSafe@localhost:5432/rauthy
+
+# Max DB connections - irrelevant for SQLite (default: 5)
+#DATABASE_MAX_CONN=5
+
+# If specified, the current Database, set with DATABASE_URL, will be DELETED and OVERWRITTEN with a migration from the
+# given database with this variable. Can be used to migrate between different databases.
+# !!! USE WITH CARE !!!
+#MIGRATE_DB_FROM=sqlite:data/rauthy.db
+
+# Disables the housekeeping schedulers (default: false)
+#SCHED_DISABLE=true
+
+#####################################
+############# E-MAIL ################
+#####################################
+
+SMTP_USERNAME=
+#SMTP_PASSWORD=
+SMTP_URL=
+# Format: "Rauthy <rauthy@localhost.de>"
+SMTP_FROM=
+
+#####################################
+###### Encryption / Hashing #########
+#####################################
+
# Format: "key_id/enc_key another_key_id/another_enc_key" - the enc_key itself must be exactly 32 characters long
+# and should not contain special characters.
+# The ID must match '[a-zA-Z0-9]{2,20}'
+#ENC_KEYS="bVCyTsGaggVy5yqQ/S9n7oCen53xSJLzcsmfdnBDvNrqQ63r4 q6u26onRvXVG4427/3CEC8RJWBcMkrBMkRXgx65AmJsNTghSA"
+ENC_KEY_ACTIVE=bVCyTsGaggVy5yqQ
+
+# M_COST should never be below 32768 in production
+ARGON2_M_COST=32768
+# T_COST should never be below 1 in production
+ARGON2_T_COST=3
+# P_COST should never be below 2 in production
+ARGON2_P_COST=2
+
+# Limits the maximum amount of parallel password hashes at the exact same time to never exceed system memory while
+# still allowing a good amount of memory for the argon2id algorithm (default: 2)
+# CAUTION: You must make sure, that you have at least (MAX_HASH_THREADS * ARGON2_M_COST / 1024) + 30 MB of memory
+# available.
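# Example: with ARGON2_M_COST=32768 and MAX_HASH_THREADS=1 as set in this file, that would be
# (1 * 32768 / 1024) + 30 = 62 MB of memory that should be available.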
+MAX_HASH_THREADS=1
+
+# The time in ms when to log a warning, if a request waited longer than this time.
+# This is an indicator, that you have more concurrent logins than allowed and may need config adjustments,
+# if this happens more often. (default: 500)
+#HASH_AWAIT_WARN_TIME=500
+
+#####################################
+####### LIFETIMES / TIMEOUTS ########
+#####################################
+
# Set the grace time in seconds for how long the refresh token should still be valid after usage.
+# Keep this value small, but do not set it to 0 with an HA deployment to not get issues with small HA cache latencies.
+#
+# If you have an external client, which does concurrent requests, from which the request interceptor wants to refresh
+# the token, you may have multiple hits on the endpoint and all of them should be valid.
+#
+# Caching is done on the endpoint itself, but grace time of 0 will only be good for a single instance of rauthy.
+# default: 5
+#REFRESH_TOKEN_GRACE_TIME=5
+
+# Lifetime for offline tokens in hours (default: 720)
+#OFFLINE_TOKEN_LIFETIME=720
+
+# Session lifetime in seconds - the session can not be extended beyond this time and a new login will be forced.
+# This is the session for the authorization code flow. (default: 14400)
+#SESSION_LIFETIME=14400
+
+# If 'true', a 2FA / MFA check will be done with each automatic token generation, even with an active session, which
+# kind of makes the session useless with Webauthn enabled, but provides maximum amount of security.
+# If 'false', the user will not get a MFA prompt with an active session at the authorization endpoint.
+# (default: false)
+#SESSION_RENEW_MFA=false
+
+# Session timeout in seconds
+# When a new token / login is requested before this timeout hits the limit, the user will be authenticated without
+# prompting for the credentials again.
+# This is the value which can extend the session, until it hits its maximum lifetime set with SESSION_LIFETIME.
+#SESSION_TIMEOUT=5400
+
+# ML: magic link
+# LT: lifetime
+# Lifetime in minutes for reset password magic links (default: 30)
+#ML_LT_PWD_RESET=30
+
+# Lifetime in minutes for the first password magic link, for setting the initial password. (default: 86400)
+#ML_LT_PWD_FIRST=86400
+
+#####################################
+############# LOGGING ###############
+#####################################
+
+# This is the log level for stdout logs
+# Accepts: error, info, debug, trace (default: info)
+#LOG_LEVEL=info
+
+# This is a special config which allows the configuration of customized access logs.
+# These logs will be logged with each request in addition to the normal LOG_LEVEL logs.
+# The following values are valid:
+# - Debug
+# CAUTION: The Debug setting logs every information available to the middleware which includes SENSITIVE HEADERS
+# DO NOT use the Debug level in a working production environment!
+# - Verbose
+# Verbose logging without headers - generates huge outputs
+# - Basic
+# Logs access to all endpoints apart from the Frontend ones which all js, css, ...
+# - Modifying
+# Logs only requests to modifying endpoints and skips all GET
+# - Off
+# (default: Modifying)
+LOG_LEVEL_ACCESS=Basic
+
+#####################################
+################ MFA ################
+#####################################
+
+# If 'true', MFA for an account must be enabled to access the rauthy admin UI (default: true)
+ADMIN_FORCE_MFA=false
+
+# If set to true, you can access rauthy's admin API only with a valid session + CSRF token.
+# If you need some external access via JWT tokens, since sessions are managed with cookies, set this to false.
+# default: true
+ADMIN_ACCESS_SESSION_ONLY=true
+
+#####################################
+############## POW #################
+#####################################
+
+## Proof of Work (PoW) configuration for Client Endpoints like User Registration
+# The iteration count for the PoW calculation (default: 1000000)
+#POW_IT=1000000
+
+# The expiration duration in seconds when a saved PoW should be cleaned up (default: 300)
+#POW_EXP=300
+
+#####################################
+############# SERVER ################
+#####################################
+
+# The server address to listen on. Can bind to a specific IP. (default: 0.0.0.0)
+#LISTEN_ADDRESS=0.0.0.0
+
+# The listen ports for HTTP / HTTPS, depending on the activated 'LISTEN_SCHEME'
+# default: 8080
+#LISTEN_PORT_HTTP=8080
+# default: 8443
+#LISTEN_PORT_HTTPS=8443
+
+# The scheme to use locally, valid values: http | https | http_https (default: http_https)
+LISTEN_SCHEME=http
+
+# The Public URL of the whole deployment
+# The LISTEN_SCHEME + PUB_URL must match the HTTP ORIGIN HEADER later on, which is especially important when running
+# rauthy behind a reverse proxy. In case of a non-standard port (80/443), you need to add the port to the PUB_URL
+PUB_URL=localhost:8080
+
+# default value: number of available physical cores
+#HTTP_WORKERS=1
+
+# When rauthy is running behind a reverse proxy, set to true (default: false)
+PROXY_MODE=false
+
+#####################################
+############### TLS #################
+#####################################
+
+## Rauthy TLS
+
+# Overwrite the path to the TLS certificate file in PEM format for rauthy (default: tls/tls.crt)
+#TLS_CERT=tls/tls.crt
+# Overwrite the path to the TLS private key file in PEM format for rauthy.
+# If the path / filename ends with '.der', rauthy will parse it as DER, otherwise as PEM.
+# (default: tls/tls.key)
+#TLS_KEY=tls/tls.key
+
+## CACHE TLS
+
+# Enable / disable TLS for the cache communication (default: true)
+CACHE_TLS=true
+# The path to the server TLS certificate PEM file (default: tls/redhac.local.cert.pem)
+CACHE_TLS_SERVER_CERT=tls/redhac.local.cert.pem
+# The path to the server TLS key PEM file (default: tls/redhac.local.key.pem)
+CACHE_TLS_SERVER_KEY=tls/redhac.local.key.pem
+# If not empty, the PEM file from the specified location will be added as the CA certificate chain for validating
+# the servers TLS certificate (default: tls/ca-chain.cert.pem)
+CACHE_TLS_CA_SERVER=tls/ca-chain.cert.pem
+
+# The path to the client mTLS certificate PEM file (default: tls/redhac.local.cert.pem)
+CACHE_TLS_CLIENT_CERT=tls/redhac.local.cert.pem
+# The path to the client mTLS key PEM file (default: tls/redhac.local.key.pem)
+CACHE_TLS_CLIENT_KEY=tls/redhac.local.key.pem
+# If not empty, the PEM file from the specified location will be added as the CA certificate chain for validating
+# the clients mTLS certificate (default: tls/ca-chain.cert.pem)
+CACHE_TLS_CA_CLIENT=tls/ca-chain.cert.pem
+
+# The domain / CN the client should validate the certificate against. This domain MUST be inside the
+# 'X509v3 Subject Alternative Name' when you take a look at the servers certificate with the openssl tool.
+# default: redhac.local
+CACHE_TLS_CLIENT_VALIDATE_DOMAIN=redhac.local
+
+# Can be used, if you need to overwrite the SNI when the client connects to the server, for instance if you are behind
+# a loadbalancer which combines multiple certificates. (default: "")
+#CACHE_TLS_SNI_OVERWRITE=
+
+#####################################
+############# WEBAUTHN ##############
+#####################################
+
# The 'Relying Party (RP) ID' - effective domain name (default: localhost)
+# CAUTION: When this changes, already registered devices will stop working and users cannot log in anymore!
+RP_ID=localhost
+
+# Url containing the effective domain name (default: http://localhost:8080)
+# CAUTION: Must include the port number!
+RP_ORIGIN=http://localhost:8080
+
+# Non critical RP Name
+# Has no security properties and may be changed without issues (default: Rauthy Webauthn)
+RP_NAME='Rauthy Webauthn'
+
+# The Cache lifetime in seconds for Webauthn requests. Within this time, a webauthn request must have been validated.
+# (default: 60)
+#WEBAUTHN_REQ_EXP=60
+
+# The Cache lifetime for additional Webauthn Data like auth codes and so on. Should not be lower than WEBAUTHN_REQ_EXP.
+# The value is in seconds (default: 90)
+#WEBAUTHN_DATA_EXP=90
+
+# With webauthn enabled for a user, he needs to enter username / password on a new system. If these credentials are
+# verified, rauthy will set an additional cookie, which will determine how long the user can then use only (safe)
+# MFA passwordless webauthn login with yubikeys, apple touch id, windows hello, ... until he needs to verify his
+# credentials again.
+# Passwordless login is generally much safer than logging in with a password. But sometimes it is possible, that the
+# Webauthn devices do not force the user to include a second factor, which in that case would be a single factor login
+# again. That is why we should ask for the original password in addition once in a while to set the cookie.
+# The value is in hours (default: 2160)
+#WEBAUTHN_RENEW_EXP=2160
+
+
+
+ You can migrate easily between SQLite and Postgres, or just between different instances of them.
Let's say you started out by evaluating Rauthy with SQLite and a single instance deployment. Later on, you want to migrate to a HA setup, which requires you to use Postgres.
+Solution: MIGRATE_DB_FROM
If you set the MIGRATE_DB_FROM
in Rauthy's config, it will perform a migration at the next restart.
The way it works is the following:
1. Check if MIGRATE_DB_FROM is configured
2. If it is, overwrite the database behind DATABASE_URL with the data from the MIGRATE_DB_FROM database
3. Disconnect from MIGRATE_DB_FROM
4. Use DATABASE_URL as the new database and start normal operation

MIGRATE_DB_FROM overwrites any data in the target database! Be very careful with this option.
If you do not remove the MIGRATE_DB_FROM
after the migration has been done, it will overwrite the target again with
+the next restart of the application. Remove the config variable immediately after the migration has finished.
The easiest way is to just set MIGRATE_DB_FROM as an environment variable, which is quick to remove again afterwards.
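For example, when moving from the SQLite file used elsewhere in this documentation over to Postgres, a one-off run with docker could look like this sketch - the connection string is just the placeholder from the reference config and must match your setup:

docker run -it --rm \
    -v $(pwd)/rauthy/data:/app/data \
    -e DATABASE_URL=postgresql://rauthy:123SuperSafe@localhost:5432/rauthy \
    -e MIGRATE_DB_FROM=sqlite:data/rauthy.db \
    -p 8080:8080 \
    --name rauthy \
    sdobedev/rauthy

After this run has finished the migration, start the container again without MIGRATE_DB_FROM.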
In the Getting Started, we have set up the ENC_KEYS
and ENC_KEY_ACTIVE
.
The ENC_KEYS
defines the static keys used for additional data encryption in a few places. This value may contain
+multiple keys, if you want to rotate them at some point without breaking the decryption of all already existing secrets.
ENC_KEY_ACTIVE
defines the key inside ENC_KEYS
which will be used as the default. This means that all new / current
+encryptions performed by the backend will use the key with the given ID.
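Just as an illustration, with the two keys from the reference config, the following would make all new encryptions use the key with the ID q6u26onRvXVG4427, while data encrypted with bVCyTsGaggVy5yqQ can still be decrypted:

ENC_KEYS="bVCyTsGaggVy5yqQ/S9n7oCen53xSJLzcsmfdnBDvNrqQ63r4 q6u26onRvXVG4427/3CEC8RJWBcMkrBMkRXgx65AmJsNTghSA"
ENC_KEY_ACTIVE=q6u26onRvXVG4427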
1. Add a new key to the ENC_KEYS
in your secrets
You must not remove a current key, before the migration has been done via the UI.
+If the old key is gone, the migration will fail.
2. Generate a new key + id
+echo "$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c8)/$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c32)"
+
+Keep in mind, you need to ADD this to your existing keys and not just replace them! If you just replace them, almost +all things will break and fall apart.
+The final format of the ENC_KEYS
should look something like this, for instance:
ENC_KEYS="bVCyTsGaggVy5yqQ/S9n7oCen53xSJLzcsmfdnBDvNrqQ63r4 q6u26onRvXVG4427/3CEC8RJWBcMkrBMkRXgx65AmJsNTghSA"
+
+If you are inside Kubernetes and (hopefully) use a Kubernetes secret for this, you need to base64 encode the whole
+value: echo 'PutTheWholeValueHere' | base64
3. Set the ENC_KEY_ACTIVE
to the ID of your newly generated key
This will make sure, that all new encryptions will use the new key. If you do not care about removing the old keys,
because you maybe just want to rotate since it's good practice, the secrets will migrate "by themselves" over time.
+If Rauthy finds any secrets during its normal operation, that have been encrypted with an older key than the current
+ENC_KEY_ACTIVE
, it will re-encrypt these secrets and update the values.
+This means, you may just stop at this point, if this is good enough for you.
4. Migrate Keys
+If you however want to trigger a re-encryption of all existing secrets on purpose, there is a small tool in the +Admin UI which helps you with this.
+Log in to the Admin UI and navigate to Config
-> Encryption Keys
.
+You will see the currently recognized keys and the currently active ID.
You can then make sure, that the ID you want to migrate secrets to is selected and execute the migrations.
+Please keep in mind, that if you have a lot of data, it might take a few seconds to perform this operation.
This will migrate all encrypted data for existing OIDC clients and all JWKs used for JWT token signing to the new
+key.
5. Remove old keys
+After a successful migration via the UI tool, you may remove old keys from the ENC_KEYS
value.
The MFA cookies, which are set for a client with an active security key after a successful login, are encrypted with the
+ENC_KEY_ACTIVE
too. This means, if you remove something from the ENC_KEYS
which was used to encrypt one of these
+MFA cookies, the user will be prompted for the password again, even if the cookie has not expired yet.
You should use FIDO 2 in production for 2FA / MFA.
To make sure it works correctly, set and check the following variables in your config.
RP_ID
This usually is the 'Relying Party (RP) ID', which should be your effective domain name.
For the above example, since our application is available under 'auth.example.com', this should also be:
+RP_ID=auth.example.com
+
+When the RP_ID
changes, already registered devices will stop working and users cannot log in anymore!
+Be very careful, if you want / need to do this in production.
RP_ORIGIN
The second important variable for FIDO 2 is the RP_ORIGIN. This needs to be set to the URL containing the effective domain name.
The RP_ORIGIN must always include the port number, even if it is just the default 443 for HTTPS.
In this example, assuming rauthy will be available at port 443, the correct value would be:
+RP_ORIGIN=https://auth.example.com:443
+
+RP_NAME
This variable can be set to anything "nice".
Its value may be shown to the user in a prompt like "RP_NAME requests your security key ...". Whether this is shown depends on the OS and the browser the client uses. Firefox, for instance, does not show this at the time of writing.
You can change the RP_NAME
later on without affecting the validation of already registered keys.
WEBAUTHN_RENEW_EXP
In my opinion, passwordless login with WebAuthn is the best thing for the user experience and the safest too.
However, not all operating systems and browsers have caught up to a point where I would use WebAuthn on its own. Firefox, for instance, is a good example. On Linux and Mac OS, it does not work with a PIN or any other second factor on your device, which basically downgrades the login to a (strong) single factor again.
+For this reason, Rauthy will always prompt a user at least once for the password on a new machine, even with active +security keys. The keys are used either as a strong second factor, when they do not work with a PIN, or bump up the whole +login to real MFA, if the OS / Browser / Key does support this.
When a user has logged in successfully on a new device with active 2FA / MFA, he will get an encrypted cookie.
+The lifetime of this cookie can be configured with WEBAUTHN_RENEW_EXP
.
+The default of this value is 2160 hours.
As long as this cookie is present, not expired and can be decrypted by the backend, the user can log in from this very device with his FIDO 2 key only, which makes for a very good user experience for the whole login flow. The E-Mail will already be filled in automatically and only a single click on the login button is necessary.
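Putting the values for this documentation's auth.example.com example together - a sketch only, with WEBAUTHN_RENEW_EXP left at its default and the RP_NAME taken from the reference config:

RP_ID=auth.example.com
RP_ORIGIN=https://auth.example.com:443
RP_NAME='Rauthy Webauthn'
WEBAUTHN_RENEW_EXP=2160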
+ +Rauthy is capable of running in a High Availability Mode (HA).
+Some values, like authentication codes for instance, do live in the cache only. Additionally, there might come an +option with a future version which offers a special in-memory only mode in some situations.
+Because of this, all instances create and share a single HA cache layer, which means at the same time, that you cannot +just scale up the replicas infinitely. The optimal amount of replicas for a HA mode would be 3, or if you need even higher +resilience 5. More replicas should work just fine, but this has never been really tested and the performance will +degrade at some point.
+To achieve the HA caching layer embedded directly into the application, I created a library (or crate in Rust terms)
+called redhac
.
This crate creates a gRPC server and a client part for each node, and each node will connect to all other ones. Once quorum
+has been reached, a leader will be elected, which then will execute all insert requests by default to avoid overlaps
+or inconsistencies and to guarantee a configured level of safety. Different so called AckLevel
are available, like
+Quorum
, Once
and Leader
in addition to a direct cache put without any safeties.
+Rauthy uses different levels in different situations to provide real HA and sync all caches between the pods. This
means that you can lose a pod and still have the in-cache-only values available on the other ones.
This syncing of the cache is the reason why write performance will degrade, if you scale up too many replicas, which should +not really be necessary anyway. The best HA performance will be achieved with 3 replicas and then scaling up the +resources for each pod before adding more replicas.
+The way to configure the HA_MODE
is optimized for a Kubernetes deployment but may seem a bit odd at the same time,
if you deploy somewhere else. You need to set the following values in the config file:
HA_MODE
The first one is easy, just set HA_MODE=true
HA_HOSTS
The HA_HOSTS
is built in a way that makes it really easy to configure inside Kubernetes, as long as a StatefulSet is used for the deployment.
The way a cache node finds its members is by the HA_HOSTS
and its own HOSTNAME
.
+In the HA_HOSTS
, add every cache member. For instance, if you want to use 3 replicas in HA mode which are running
+and are deployed as a StatefulSet with the name rauthy
again:
HA_HOSTS="http://rauthy-0:8000, http://rauthy-1:8000 ,http://rauthy-2:8000"
+
The way it works:
1. Each node determines its own hostname. For a StatefulSet called rauthy, the replicas will always have the names rauthy-0, rauthy-1, ..., which are at the same time the hostnames inside the pod.
2. Each node then looks for its own hostname inside the HA_HOSTS variable.
3. If the hostname cannot be found in the HA_HOSTS, the application will panic and exit because of a misconfiguration.
4. If it was found in the HA_HOSTS, take the leftover nodes as all cache members and connect to them.

If you are in an environment where the described mechanism with extracting the hostname would not work, you can set
for each instance to match one of the HA_HOSTS
entries.
CACHE_AUTH_TOKEN
You need to set a secret for the CACHE_AUTH_TOKEN
which was left out in the
+Getting Started
Just create a secret and add it in the same way:
+echo "$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c48)" | base64
+
+If you are using a service mesh like for instance linkerd which creates mTLS connections between +all pods by default, you can use the HA cache with just plain HTTP, since linkerd will encapsulate the traffic anyway.
+You may then set
+CACHE_TLS=false
+
+to disable the use of TLS certificates between cache member.
+However, if you do not have encryption between pods by default, I would highly recommend, that you use TLS.
+ +If you do have TLS certificates from another source already, skip directly to Config.
+The tools provided in the rauthy repository are very basic and have a terrible user experience.
+They should only be used, if you do not have an already existing TLS setup or workflow.
+A project specifically tailored to TLS CA and certificates is in the making.
As mentioned, the tools are very basic. If you for instance type in a bad password during CA / intermediate generation,
+they will just throw an error, and you need to clean up again. They should only get you started and be used for testing.
+There are a lot of good tools out there which can get you started with TLS and there is no real benefit in creating
+just another one that does the same stuff.
The scripts can be found here (TODO add link to tools).
+You need to have openssl
and a BASH shell available on your system. They have not been tested with Windows.
The cache layer does validate the CA for mTLS connections, which is why you can generate a full set of certificates.
+1. Certificate Authority (CA)
+./build_ca.sh
+
+and enter an at least 4 character password for the private key file for the CA 3 times.
+2. Intermediate CA
+./build_intermediate.sh
+
+and enter an at least 4 character password for the private key file for the CA 3 times.
+y
2 times to verify the signing of the new certificateintermediate/certs/intermediate.cert.pem: OK
as the last line3. End Entity Certificates
+These are the certificates used by the cache, rauthy itself, or any other server / client.
./build_end_entity.sh redhac.local
+
+intermediate/certs/redhac.local.cert.pem: OK
as the last line ./build_end_entity.sh auth.example.com
+
+../
folder:auth.example.com.cert.pem
+auth.example.com.key.pem
+ca-chain.pem
+redhac.local.cert.pem
+redhac.local.key.pem
+
+This is not a tutorial about TLS certificates.
+As mentioned above already, another dedicated TLS project is in the making.
The reference config contains a TLS
section with all the values you can set.
The cache layer (optionally) creates an mTLS connection and validates client certificates, if they are configured.
+To enable TLS at all, set
CACHE_TLS=true
+
+By default, redhac
expects certificates to be in /app/tls/
with the common name / SNI redhac.local
.
+The certificates need to be in the PEM format and you can provide different certificates for the server and client part,
+if you like.
If this differs from your setup, you can set the following config variables:
+# The path to the server TLS certificate PEM file (default: tls/redhac.local.cert.pem)
+CACHE_TLS_SERVER_CERT=tls/redhac.local.cert.pem
+# The path to the server TLS key PEM file (default: tls/redhac.local.key.pem)
+CACHE_TLS_SERVER_KEY=tls/redhac.local.key.pem
+# If not empty, the PEM file from the specified location will be added as the CA certificate chain for validating
+# the servers TLS certificate (default: tls/ca-chain.cert.pem)
+CACHE_TLS_CA_SERVER=tls/ca-chain.cert.pem
+
+# The path to the client mTLS certificate PEM file (default: tls/redhac.local.cert.pem)
+CACHE_TLS_CLIENT_CERT=tls/redhac.local.cert.pem
+# The path to the client mTLS key PEM file (default: tls/redhac.local.key.pem)
+CACHE_TLS_CLIENT_KEY=tls/redhac.local.key.pem
+# If not empty, the PEM file from the specified location will be added as the CA certificate chain for validating
+# the clients mTLS certificate (default: tls/ca-chain.cert.pem)
+CACHE_TLS_CA_CLIENT=tls/ca-chain.cert.pem
+
+# The domain / CN the client should validate the certificate against. This domain MUST be inside the
+# 'X509v3 Subject Alternative Name' when you take a look at the servers certificate with the openssl tool.
+# default: redhac.local
+CACHE_TLS_CLIENT_VALIDATE_DOMAIN=redhac.local
+
+# Can be used, if you need to overwrite the SNI when the client connects to the server, for instance if you are behind
+# a loadbalancer which combines multiple certificates. (default: "")
+#CACHE_TLS_SNI_OVERWRITE=
+
+The TLS configuration for the REST API is much simpler.
+By default, rauthy will expect a certificate and a key file in /app/tls/tls.key
and /app/tls/tls.crt
, which is the
+default naming for a Kubernetes TLS secret. The expected format is PEM, but you could provide the key in DER format too,
+if you rename the file-ending to *.der
.
You can change the default path for the files with the config variables TLS_CERT
and TLS_KEY
.
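If you keep, for instance, the file names generated in the section above instead of the Kubernetes default naming, this could look like the following sketch - adjust the paths to wherever you mount the files:

TLS_CERT=tls/auth.example.com.cert.pem
TLS_KEY=tls/auth.example.com.key.pem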
If you did not follow the above procedure to generate the CA and certificates, you may need to rename the files in the following commands to create the Kubernetes secrets.
+Secrets - REST API
kubectl -n rauthy create secret tls rauthy-tls --key="../auth.example.com.key.pem" --cert="../auth.example.com.cert.pem"
+
+Secrets - redhac
cache
kubectl -n rauthy create secret tls redhac-tls-server --key="../redhac.local.key.pem" --cert="../redhac.local.cert.pem" && \
+kubectl -n rauthy create secret generic redhac-server-ca --from-file ../ca-chain.pem
+
We need to configure the newly created Kubernetes secrets in the sts.yaml
from the
+Kubernetes setup.
In the spec.template.spec.volumes
section, we need to mount the volumes from secrets:REST API:
+- name: rauthy-tls
+ secret:
+ secretName: rauthy-tls
+
+redhac
cache:
- name: redhac-tls-server
+ secret:
+ secretName: redhac-tls-server
+- name: redhac-server-ca
+ secret:
+ secretName: redhac-server-ca
+
In the spec.template.spec.containers.[rauthy].volumeMounts
section, add:
+- mountPath: /app/tls/
+ name: rauthy-tls
+ readOnly: true
+
+redhac
cache:
- mountPath: /app/tls/redhac/certs
+ name: redhac-tls-server
+ readOnly: true
+- mountPath: /app/tls/redhac/ca
+ name: redhac-server-ca
+ readOnly: true
+
+After having modified the config from above and the sts.yaml
now, just apply both:
kubectl apply -f config.yaml
+kubectl apply -f sts.yaml
+
+The rauthy
pods should restart now and TLS is configured.
For getting a first look at rauthy, you can start it with docker (or any other container runtime) on your localhost. +The image contains a basic default config which is sufficient for local testing.
+docker run -i --rm \
+ -p 8080:8080 \
+ --name rauthy \
+ sdobedev/rauthy
+
+This will start the container in interactive mode with an in-memory SQLite database.
+If you want to test a bit more in depth, you can change to an on-disk database easily:
+docker run -d \
+ -e DATABASE_URL=sqlite:data/rauthy.db \
+ -p 8080:8080 \
+ --name rauthy \
+ sdobedev/rauthy
+
+The second command does not start in interactive mode and it does not delete the container on exit.
This means the data will be persisted, as long as the container itself is not erased, and you can shut down and restart to your liking without losing your test data.
To see the logs and the new admin password, take a look with
+docker logs -f rauthy
+
+To delete the container, if you do not need it anymore, execute
+docker stop rauthy && docker rm rauthy
+
+To proceed, go to First Start
+For going to production or to test more in-depth, you need to apply a config that matches your environment.
+The first thing you might want to do is to add a volume mount for the database.
+The second thing is to provide a more custom config.
Rauthy can either be configured via environment variables only, or you can provide a config file.
+You can add environment variables to the startup command with the -e
option, like shown in the on-disk SQLite
+command.
A better approach, when you have more values to set, would be to have all of them in a config file.
The following commands will work on Linux and Mac OS (even though not tested on Mac OS). If you are on Windows,
+you might replace the pwd
command and just paste in the path directly. Since I am no Windows user myself, I
+cannot provide tested commands in this case.
1. We want to create a new directory for rauthy's persistent data
+mkdir rauthy
+
+2. Add the new config file.
+This documentation is in an early version and remote links are not available yet, they will be added at a later
+point. For now, create a new file and paste the reference config
vim rauthy/rauthy.cfg
+
+3. Create a sub-directory for the Database files
+mkdir rauthy/data
+
+The rauthy container by default runs everything with user:group 10001:10001 for security reasons.
+To make this work with the default values, you have 2 options:
chmod 0640 rauthy/rauthy.cfg
+chmod 0700 -R rauthy/data
+sudo chown -R 10001:10001 rauthy
+
+chmod a+w rauthy/data
+
+This will make the directory writeable for everyone, so rauthy can create the database files inside the container +with 10001:10001 again.
+The safest approach would be to change the owner and group for these files on the host system. This needs sudo
+to edit the config, which may be a bit annoying, but at the same time it makes sure, that you can only read
+the secrets inside it with sudo
too.
4. Adapt the config to your liking.
+Make sure to adjust the volume mount for the sqlite directory in step 5, if it differs from sqlite:data/rauthy.db
5. Start the container with volume mounts
+docker run -d \
+ -v $(pwd)/rauthy/rauthy.cfg:/app/rauthy.cfg \
+ -v $(pwd)/rauthy/data:/app/data \
+ -p 8080:8080 \
+ --name rauthy \
+ sdobedev/rauthy
+
+6. Restrict DB files access even more
+After rauthy has done the first start, you could harden the access rights of the SQLite files even more.
+This would make sure, that no one without sudo
could just copy and read in the SQLite in some other place.
+Just execute once more:
sudo chmod 0700 -R rauthy/data
+
+7. You can now proceed with the First Start steps.
With the very first start of rauthy, or better with an empty database, when rauthy is starting, it does not only create all the necessary schemas and initial data, but also some sensitive information will be generated safely. This includes a set of JSON Web Keys (JWKS) for the token signing and some secrets.
+The most important of these newly generated secrets is the default admin user's password.
+When this is securely generated with the very first start, it will be logged into the console. This will only
+happen once and never again.
docker logs -f rauthy
+
+kubectl -n rauthy logs -f rauthy-0
+
If you do a Kubernetes HA deployment directly, only the Pod rauthy-0
will log the initial password.
If you missed this log entry, you will not be able to log in.
+If this is the case, you can delete the database / volume and just restart rauthy.
The log message contains a link to the accounts page, where you then should log in to immediately set a new password.
+Follow the link, use as the default admin admin@localhost.de
and as password the copied value from the log.
Then use EDIT and CHANGE PASSWORD to set a new password.
as a fallback user with just a very long password, or
+disable it, after a custom admin has been added.
When logged in to the admin UI, you can add a new user. When the SMTP
settings are correctly configured in the config,
+which we can test right now, you will receive an E-Mail with the very first password reset.
If you do not receive an E-Mail after the first user registration, chances are you may have a problem with the SMTP
+setup.
+To debug this, you can set LOG_LEVEL=debug
in the config and then watch the logs after a restart.
rauthy_admin
user roleThe role, which allows a user to access the admin UI, is the rauthy_admin
.
+If the user has this role assigned, he will be seen as an admin.
Under the hood, rauthy itself uses the OIDC roles and groups in the same way, as all clients would do. This means you
+should not neither delete the rauthy
default client, nor the rauthy_admin
role. There are mechanisms to prevents
+this happening by accident via UI, but you could possibly do this via a direct API call.
+There are some anti-lockout mechanisms in place in the backend, which will be executed with every start, but being
+careful at this point is a good idea anyway.
At the time of writing, there is no Helm Chart or Kustomize files available yet. The whole setup is pretty simple +on purpose though, so it should not be a big deal to get it running inside Kubernetes.
+Since rauthy uses pretty aggressive caching for different reasons, you cannot just have a single deployment and
+scale up the replicas without enabling HA_MODE
. How to deploy a HA version is described below.
The steps to deploy on Kubernetes are pretty simple.
+For the purpose of this documentation, we assume that rauthy will be deployed in the rauthy
namespace.
+If this is not the case for you, change the following commands accordingly.
kubectl create ns rauthy
+
+This documentation will manage the Kubernetes files in a folder called rauthy
.
mkdir rauthy && cd rauthy
+
+Create the config file, paste the reference config and adjust it to your needs.
+There is no "nice 1-liner" available yet.
echo 'apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: rauthy-config
+ namespace: rauthy
+data:
+ rauthy.cfg: |+
+ PASTE CONFIG HERE - WATCH THE INDENTATION' > config.yaml
+
+Open the config with your favorite editor and paste the reference config in place.
+Make sure to watch the indentation.
Do not include sensitive information like for instance the ENC_KEYS inside the normal Config.
+Use the secrets from the next step for this.
+If you use SQLite, you can include the DATABASE_URL in the config, since it does not contain a password, but
+never do this for Postgres!
touch secrets.yaml
+
+Paste the following content into the secrets.yaml
file:
apiVersion: v1
+kind: Secret
+metadata:
+ name: rauthy-secrets
+ namespace: rauthy
+type: Opaque
+data:
+ # The CACHE_AUTH_TOKEN is only needed for a deployment with HA_MODE == true
+ # Secret token, which is used to authenticate the cache members
+ #CACHE_AUTH_TOKEN:
+
+ # The database driver will be chosen at runtime depending on the given DATABASE_URL format. Examples:
+ # Sqlite: 'sqlite:data/rauthy.db' or 'sqlite::memory:'
+ # Postgres: 'postgresql://User:PasswordWithoutSpecialCharacters@localhost:5432/DatabaseName'
+ DATABASE_URL:
+
+ # Format: "key_id/enc_key another_key_id/another_enc_key" - the enc_key itself must be exactly 32 characters long and
+ # and should not contain special characters.
+ # The ID must match '[a-zA-Z0-9]{2,20}'
+ ENC_KEYS:
+
+ # Needed for sending E-Mails for password resets and so on
+ SMTP_PASSWORD:
+
+The secrets need to be base64 encoded. If you are on Linux, you can do this in the shell easily. If not, use
+any tool you like.
+Make sure that things like CACHE_AUTH_TOKEN
(only needed with HA_MODE == true
) and ENC_KEYS
are generated
+in a secure random way.
The DATABASE_URL
with SQLite, like used in this example, does not contain sensitive information, but we will
+create it as a secret anyway to have an easier optional migration to postgres later on.
echo 'sqlite:data/rauthy.db' | base64
+
+Generate a new encryption key with ID in the correct format.
+echo "$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c8)/$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c32)" | base64
+
+Paste the base64 String in the secrets for ENC_KEYS
.
+To extract the ENC_KEY_ID
, which needs to be added to the config from Step 2:
echo PasteTheGeneratedBase64Here | base64 -d | cut -d/ -f1
+
+And finally, the SMTP_PASSWORD
echo 'PasteYourSMTPPasswordHere' | base64
+
+Paste all the generated secrets into the secrets.yaml
file and the ENC_KEY_ID
into the config.yaml
from the step
+before.
touch sts.yaml
+
+Paste the following content into the sts.yaml
file:
apiVersion: v1
+kind: Service
+metadata:
+ name: rauthy
+ namespace: rauthy
+spec:
+ selector:
+ app: rauthy
+ ports:
+ # Assuming that this example file will run behind a Kubernetes ingress and does use HTTP internally.
+ - name: http
+ port: 8080
+ targetPort: 8080
+ # Uncomment, if you change to direct HTTPS without a reverse proxy
+ #- name: https
+ # port: 8443
+ # targetPort: 8443
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: rauthy
+ namespace: rauthy
+ labels:
+ app: rauthy
+spec:
+ serviceName: rauthy
+ # Do not just scale up replicas without a proper HA Setup
+ replicas: 1
+ selector:
+ matchLabels:
+ app: rauthy
+ template:
+ metadata:
+ labels:
+ app: rauthy
+ spec:
+ securityContext:
+ fsGroup: 10001
+ containers:
+ - name: rauthy
+ image: registry.netitservices.com/sd/rauthy:0.12.0-beta1
+ imagePullPolicy: IfNotPresent
+ securityContext:
+ runAsUser: 10001
+ runAsGroup: 10001
+ allowPrivilegeEscalation: false
+ ports:
+ # You may need to adjust this, if you decide to start in https only mode or use another port
+ - containerPort: 8080
+ #- containerPort: 8443
+ env:
+ - name: DATABASE_URL
+ valueFrom:
+ secretKeyRef:
+ name: rauthy-secrets
+ key: DATABASE_URL
+ - name: ENC_KEYS
+ valueFrom:
+ secretKeyRef:
+ name: rauthy-secrets
+ key: ENC_KEYS
+ - name: SMTP_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: rauthy-secrets
+ key: SMTP_PASSWORD
+ volumeMounts:
+ - mountPath: /app/data
+ name: rauthy-data
+ readOnly: false
+ - mountPath: /app/rauthy.cfg
+ subPath: rauthy.cfg
+ name: rauthy-config
+ readOnly: true
+ #- mountPath: /app/tls/
+ # name: rauthy-tls
+ # readOnly: true
+ readinessProbe:
+ httpGet:
+ # You may need to adjust this, if you decide to start in https only mode or use another port
+ scheme: HTTP
+ port: 8080
+ #scheme: HTTPS
+ #port: 8443
+ path: /auth/v1/ping
+ initialDelaySeconds: 1
+ periodSeconds: 10
+ livenessProbe:
+ httpGet:
+ # You may need to adjust this, if you decide to start in https only mode or use another port
+ scheme: HTTP
+ port: 8080
+ #scheme: HTTPS
+ #port: 8443
+ path: /auth/v1/health
+ initialDelaySeconds: 1
+ periodSeconds: 30
+ resources:
+ requests:
+ # Tune the memory requests value carefully. Make sure, that the pods request at least:
+ # `ARGON2_M_COST` / 1024 * `MAX_HASH_THREADS` Mi
+ # With SQLite: for small deployments, add additional ~20-30Mi for "the rest",
+ # for larger ones ~50-70 Mi should be enough.
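                # Example: with the reference config values ARGON2_M_COST=32768 and MAX_HASH_THREADS=1,
                # this works out to 32Mi + ~30Mi, which is where the 64Mi request comes from.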
+ memory: 64Mi
+ # The CPU needs to be adjusted during runtime. This heavily depends on your use case.
+ cpu: 100m
+ limits:
+ # You can limit memory, but you don't really need to, since it should never exceed the expected amount.
+ #memory:
+ # A CPU limit makes sense in case of DDoS attacks or something like this, if you do not have external
+ # rate limiting or other mechanisms. Otherwise, `MAX_HASH_THREADS` is the main mechanism to limit resources.
+ cpu: 1000m
+ volumes:
+ - name: rauthy-config
+ configMap:
+ name: rauthy-config
+ #- name: rauthy-tls
+ # secret:
+ # secretName: rauthy-tls
+ #imagePullSecrets:
+ # - name: provideSecretIfUsingPrivateRegistry
+ volumeClaimTemplates:
+ - metadata:
+ name: rauthy-data
+ spec:
+ accessModes:
+ - "ReadWriteOnce"
+ resources:
+ requests:
+ storage: 128Mi
+ #storageClassName: provideIfYouHaveMultipleOnes
+
+This example assumes, that the deployment will run behind a Kubernetes ingress resource of your choice.
+It uses Traefik with the IngressRoute
CRD.
Nevertheless, the ingress is really simple, and it should be very easy to adapt this to anything else.
Create the ingress.yaml
touch ingress.yaml
+
+Paste the following content into the ingress.yaml
file and adjust to your needs
apiVersion: traefik.containo.us/v1alpha1
+kind: IngressRoute
+metadata:
+ name: rauthy-https
+ namespace: rauthy
+spec:
+ entryPoints:
+ - websecure
+ routes:
+ - match: Host(`auth.example.com`)
+ kind: Rule
+ services:
+ - name: rauthy
+ port: 8080
+
+We are now ready to deploy:
+kubectl apply -f .
+
+And then to observe the deployment:
+kubectl -n rauthy get pod -w
+
+You can now proceed with the First Start steps.
+Going to production does not need too many additional steps.
The main thing you need is a set of valid TLS certificates, of course. To get these, there are a lot of existing mechanisms.
If you use an internal Certificate Authority (CA), you have your own tools to work with this anyway. If, however, you
+want to use something like Let's Encrypt, I suggest to use the
+cert-manager, which is easy and straight forward to use.
An example, how to add a certificate for the Traefik IngressRoute from above:
+apiVersion: traefik.containo.us/v1alpha1
+kind: IngressRoute
+metadata:
+ name: rauthy-https
+ namespace: rauthy
+spec:
+ entryPoints:
+ - websecure
+ tls:
+ # Paste the name of the TLS secret here
+ secretName: secret-name-of-your-tls-certificate
+ routes:
+ - match: Host(`auth.example.com`)
+ kind: Rule
+ services:
+ - name: rauthy
+ port: 8080
+
+You may want to add an HTTPS redirect as well:
+apiVersion: traefik.containo.us/v1alpha1
+kind: Middleware
+metadata:
+ name: https-only
+ namespace: rauthy
+spec:
+ redirectScheme:
+ scheme: https
+ permanent: true
+---
+apiVersion: traefik.containo.us/v1alpha1
+kind: IngressRoute
+metadata:
+ name: rauthy-https
+ namespace: rauthy
+spec:
+ entryPoints:
+ - web
+ routes:
+ - match: Host(`auth.example.com`)
+ kind: Rule
+ middlewares:
+ - name: https-only
+ services:
+ - name: rauthy
+ port: 8080
+
+There are a few more things to do when going into production, but these are the same for Kubernetes and Docker and will +be explained in later chapters.
+You can now proceed with the First Start steps.
+ +At the time of writing, you can run Rauthy either with Docker or inside Kubernetes.
+ +There has not been any third party security audit for this project.
+Use this software at your own risk!
This project is currently pre v1.0, which means, even though it is not expected, breaking changes might come +with new versions.
+Rauthy is an OpenID Connect (OIDC) Provider and Single Sign-On solution written in Rust.
+Secure by default
+It tries to be as secure as possible by default while still providing all the options needed to be compatible with
+older systems. For instance, if you create a new OIDC client, it activates ed25519
as the default algorithm for
+token signing and S256 PKCE flow. This will not work with old clients, which do not support it, but you can of course
+deactivate this to your liking.
MFA and Passwordless Login
+Rauthy provides FIDO 2 / Webauthn login flows. If you once logged in on a new client with your username + password, you
+will get an encrypted cookie which will allow you to log in without a password from that moment on. You only need to
+have a FIDO compliant Passkey being registered for your account.
+The reason why it requests your password on a new host at least once is pretty simple. Even though most browsers have
full support even for user verification, it is possible that in some scenarios a set PIN or biometric fingerprint
+reader will just be ignored by some browsers, which would reduce the strong MFA login to only a single factor again.
+As long as the full support does not exist on every device out there, Rauthy will not allow a "Passkey only Login flow"
+for security reasons.
An example of a combination that does not work correctly would be Firefox on Mac OS, Firefox pre v114 on Linux, or almost
+every browser on Android.
Fast and efficient
+The main goal was to provide an SSO solution like Keycloak and others while using a way lower footprint
+and being more efficient with resources. For instance, Rauthy can easily run a fully blown SSO provider on just a
+Raspberry Pi. It makes extensive use of caching to be as fast as possible in cases where your database is further
+away or just a bit slower, because it is maybe running on an SBC from an SD card. Most things are even cached
+for several hours (config options will come in the future) and special care has been taken into account in case of cache
+eviction and invalidation.
For achieving this speed and efficiency, some additional design tradeoffs were made. For instance, some things you
+configure statically via config file and not dynamically via UI, while most of them are configured once and then never
+touched again.
Highly Available
+Even though it makes extensive use of caching, you can run it in HA mode. It uses its own embedded distributed HA cache
+called redhac, which cares about cache eviction on remote hosts.
+You can choose between a SQLite for single instance deployments and a Postgres, if you need HA. MySQL support might
+come in the future.
Client Branding
+You have a simple way to create some kind of branding or stylized look for the Login page for each client.
+The whole color theme can be changed and each client can have its own custom logo.
+If you need more than that, you would need to clone the project, manually edit the frontend files and then build the
+project from source.
Already in production
+Rauthy is already being used in production, and it works with all typical OIDC clients (so far). It was just not an
+open source project for quite some time.
+Keycloak was a rough inspiration in certain places and if something is working with Keycloak, it does with rauthy
too
+(again, so far).
Since Rauthy is currently pre v1.0, it might be missing some nice to have features. Some of them will never be +implemented (see below), while others might come or are even planned already.
+Currently missing features:
+UI translation
+The Admin UI will never be translated, but a basic translation for the Login and Account page may come.
Rauthy Name Override
+The idea of this feature is, that one may be able to override the Rauthy name in different places like E-Mail
+notifications or the Admin UI. This would make it possible to not confuse external users, when they expect some
+other deployment name. The currently existing client branding feature can be modified for the rauthy client itself
+already, but is only affecting the Login page for now. This might change as well, so it would affect every component.
Rauthy Authenticator MFA App
+Even though things like OTP codes will never be implemented, it is not set in stone yet that there will never be Rauthy's
+own Authenticator App, which then basically acts as a Webauthn Software Authenticator. There are already existing
+solutions out there to serve this purpose.
+In the current version, deprecated artifacts of a first approach for its own Authenticator App do exist, but they will
+be cleaned up in the near future.
Customizable E-Mail templates
+It is unsure, if this feature will come.
OIDC Client
+Rauthy will most probably have the option to be an OIDC Client itself as well. With this feature, you would be able
+to do things like "Login with Github" to Rauthy and then use Rauthy for the extended management and features.
MySQL Support
+At the time of writing it is not clear yet, if MySQL / MariaDB databases will be added.
+The Foundation is there, it just is the case that some specific queries need to be rewritten / added in a few places
+to match the new SQL dialect.
Rauthy does not try to just replicate already existing, great software.
+For instance, if you need way more flexibility regarding federated users, fully customizable login flows and things
+like SAML or LDAP, then you might want to take a look at solutions like Keycloak.
Rauthy wants to do just a few things, but these things good, fast, efficient and secure.
+This means it will never implement: