diff --git a/CHANGELOG.md b/CHANGELOG.md
index 752f5f51a..bed7d6002 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -88,14 +88,61 @@ HQL_BACKUP_CRON="0 30 2 * * * *"
#HQL_S3_KEY=s3_key
#HQL_S3_SECRET=s3_secret

+#####################################
+############# CLUSTER ###############
+#####################################
+
+# Can be set to 'k8s' to try to split off the node id from the hostname
+# when Hiqlite is running as a StatefulSet inside Kubernetes.
+#HQL_NODE_ID_FROM=k8s
+
+# The node id must exist in the nodes, and there must always be
+# at least one node with ID 1.
+# Will be ignored if `HQL_NODE_ID_FROM=k8s`.
+HQL_NODE_ID=1
+
+# All cluster member nodes.
+# To make setting the env var easy, the values are separated by `\s`
+# while nodes are separated by `\n`
+# in the following format:
+#
+# id addr_raft addr_api
+# id addr_raft addr_api
+# id addr_raft addr_api
+#
+HQL_NODES="
+1 localhost:8100 localhost:8200
+"
+
+# Sets the limit when the Raft will trigger the creation of a new
+# state machine snapshot and purge all logs that are included in
+# the snapshot.
+# Higher values can achieve more throughput in very write-heavy
+# situations, but they will result in more disk usage and longer
+# snapshot creations / log purges.
+# default: 10000
+#HQL_LOGS_UNTIL_SNAPSHOT=10000
+
+# Secrets for Raft internal authentication as well as for the API.
+# These must be at least 16 characters long, and you should provide
+# different ones for both variables.
+HQL_SECRET_RAFT=SuperSecureSecret1337
+HQL_SECRET_API=SuperSecureSecret1337
+
+# You can either parse `ENC_KEYS` and `ENC_KEY_ACTIVE` from the
+# environment by setting this value to `env`, or parse them from
+# a file on disk with `file:path/to/enc/keys/file`
+# default: env
+#HQL_ENC_KEYS_FROM=env
+
#####################################
############ DATABASE ###############
#####################################

# Max DB connections for the Postgres pool.
# Irrelevant for Hiqlite.
-# default: 5
-#DATABASE_MAX_CONN=5
+# default: 20
+#DATABASE_MAX_CONN=20

# If specified, the currently configured Database will be DELETED and
# OVERWRITTEN with a migration from the given database with this variable.

@@ -125,9 +172,49 @@ HQL_BACKUP_CRON="0 30 2 * * * *"
# default: false
#HQL_LOG_STATEMENTS=false

-# The password for the Hiqlite dashboard as b64 encoded Argon2ID hash.
+# The size of the pooled connections for local database reads.
+#
+# Do not confuse this with a pool size for network databases, as it
+# is much more efficient. You can't really translate between them,
+# because it depends on many things, but assuming a factor of 10 is
+# a good start. This means, if you needed a (read) pool size of 40
+# connections for something like a Postgres before, you should start
+# at a `read_pool_size` of 4.
+#
+# Keep in mind that this pool is only used for reads, while writes
+# travel through the Raft and have their own dedicated connection.
+#
+# default: 4
+#HQL_READ_POOL_SIZE=4
+
+# Enables immediate flush + sync to disk after each Log Store Batch.
+# The situations where you would need this are very rare, and you
+# should use it with care.
+#
+# The default is `false`, and a flush + sync will be done in 200ms
+# intervals. Even if the application should crash, the OS will take
+# care of flushing left-over buffers to disk, and no data will get
+# lost. If something worse happens, you might lose the last 200ms
+# of commits (on that node, not the whole cluster). This is only
+# important to know for single instance deployments. HA nodes will
+# sync data from other cluster members after a restart anyway.
+#
+# The only situation where you might want to enable this option is
+# when you are on a host that might lose power out of nowhere, and
+# it has no backup battery, or when your OS / disk itself is unstable.
+#
+# `sync_immediate` will greatly reduce the write throughput and put
+# a lot more pressure on the disk. If you have lots of writes, it
+# can pretty quickly kill your SSD, for instance.
+#HQL_SYNC_IMMEDIATE=false
+
+# The password for the Hiqlite dashboard as Argon2ID hash.
# '123SuperMegaSafe' in this example
-HQL_PASSWORD_DASHBOARD=JGFyZ29uMmlkJHY9MTkkbT0xOTQ1Nix0PTIscD0xJGQ2RlJDYTBtaS9OUnkvL1RubmZNa0EkVzJMeTQrc1dxZ0FGd0RyQjBZKy9iWjBQUlZlOTdUMURwQkk5QUoxeW1wRQ==
+#
+# You only need to provide this value if you need to access the
+# Hiqlite debugging dashboard for whatever reason. If no password
+# hash is given, the dashboard will not be reachable.
+#HQL_PASSWORD_DASHBOARD=JGFyZ29uMmlkJHY9MTkkbT0xOTQ1Nix0PTIscD0xJGQ2RlJDYTBtaS9OUnkvL1RubmZNa0EkVzJMeTQrc1dxZ0FGd0RyQjBZKy9iWjBQUlZlOTdUMURwQkk5QUoxeW1wRQ==
```

##### Migration (Postgres)

@@ -218,6 +305,31 @@ and again with each following restart and therefore remove everything that has h
[#601](https://github.com/sebadob/rauthy/pull/601)
[#603](https://github.com/sebadob/rauthy/pull/603)

+#### User Registration - Redirect Hint
+
+As an additional hardening, the open redirect hint for user registrations has been locked down a bit by default.
+If you used this feature before, you should update `Client URI`s via the Admin UI, so all possible `redirect_uri`s
+you are using will still be considered valid, or opt out of the additional hardening.
+
+```
+# If set to `true`, any validation of the `redirect_uri` provided during
+# a user registration will be disabled.
+# Clients can use this feature to redirect the user back to their application
+# after a successful registration, so instead of ending up in the user
+# dashboard, they come back to the client app that initiated the registration.
+#
+# The given `redirect_uri` will be compared against all registered
+# `client_uri`s and will throw an error if there is no match. However,
+# this check will prevent ephemeral clients from using this feature. Only
+# if you need it in combination with ephemeral clients should you
+# set this option to `true`. Otherwise, it is advised to set the correct
+# Client URI in the Admin UI. The `redirect_uri` will be allowed if it starts
+# with any registered `client_uri`.
+#
+# default: false
+#USER_REG_OPEN_REDIRECT=true
+```
+
#### Optional User Family Name

Since I received quite a few questions and requests regarding the mandatory `family_name` for users, I decided to change
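Since the new `CLUSTER` section above only shows a single-node `HQL_NODES` value, here is a purely hypothetical three-node variant in the same `id addr_raft addr_api` format (the host names are made up for illustration):

```
HQL_NODES="
1 rauthy-0.rauthy:8100 rauthy-0.rauthy:8200
2 rauthy-1.rauthy:8100 rauthy-1.rauthy:8200
3 rauthy-2.rauthy:8100 rauthy-2.rauthy:8200
"
```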
diff --git a/README.md b/README.md
index 38a5d5346..37185e25e 100644
--- a/README.md
+++ b/README.md
@@ -16,7 +16,7 @@ Rauthy is an OpenID Connect (OIDC) Provider and Single Sign-On solution written
It tries to be as secure as possible by default while still providing all the options needed to be compatible with
older systems. For instance, if you create a new OIDC client, it activates `ed25519` as the default algorithm for
-token signing and S256 PKCE flow. This will not work with old clients, which do not support it, but you can of course
+token signing and S256 PKCE flow. This will not work with clients that do not support it, but you can of course
deactivate this to your liking.

### MFA and Passwordless Login

@@ -24,8 +24,7 @@ deactivate this to your liking.
**Option 1:** Password + Security Key (without User Verification):
Rauthy provides FIDO 2 / Webauthn login flows. If you once logged in on a new client with your username + password,
-you
-will get an encrypted cookie which will allow you to log in without a password from that moment on. You only need to
+you will get an encrypted cookie which will allow you to log in without a password from that moment on. You only need to
have a FIDO compliant Passkey being registered for your account.

**Option 2:**
@@ -34,61 +33,63 @@ Rauthy supports Passkey-Only-Accounts: you basically just provide your E-Mail address and log in with
your FIDO 2 Passkey. Your account will not even have / need a password. This login flow is restricted though to only
those passkeys, that can provide User Verification (UV) to always have at least 2FA security.

-**Note:**
-Discoverable credentials are discouraged with Rauthy. This means you will need to enter your E-Mail for the login
-(which will be auto-filled after the first one), but Rauthy passkeys do not use any storage on your device. For instance
-when you have a Yubikey which can store 25 passkeys, it will not use a single slot there even having full support.
+> Discoverable credentials are discouraged with Rauthy (for good reason). This means you will need to enter your E-Mail
+> for the login (which will be autofilled after the first one), but Rauthy passkeys do not use any storage on your
+> device. For instance, when you have a Yubikey which can store 25 passkeys, it will not use a single slot there, even
+> though it has full support.

### Fast and efficient

-The main goal was to provide an SSO solution like Keycloak and others while using a way lower footprint
-and being more efficient with resources. For instance, Rauthy can easily run a fully blown SSO provider on just a
-Raspberry Pi. It makes extensive use of caching to be as fast as possible in cases where your database is further
-away or just a bit slower, because it is maybe running on an SBC from an SD card or in the cloud with the lowest storage
-bandwidth. Most things are even cached for several hours and special care has been taken into account in case of cache
-eviction and invalidation.
+The main goal was to provide an SSO solution like Keycloak and others while using a way lower footprint and being more
+efficient with resources. For instance, Rauthy can easily run a full-blown SSO provider on just a Raspberry Pi. It
+makes extensive use of caching for everything used in the authentication chain to be as fast as possible. Most things
+are even cached for several hours, and special care has been taken regarding cache eviction and invalidation.

-A Rauthy deployment with the embedded SQLite, filled caches and a small set of clients and users configured typically
-only uses **between 20 and 25 MB of memory**! This is pretty awesome when comparing it to other existing solutions
-out there. If a password from a login is hashed, the memory consumption will of course go up way higher than this
-depending on your configured Argon2ID parameters, which you got fully under control.
+Rauthy comes with two database options:
-For achieving the speed and efficiency, some additional design tradeoffs were made. For instance, some things you
-configure statically via config file and not dynamically via UI, while most of them are configured once and then never
+- with embedded [Hiqlite](https://github.com/sebadob/hiqlite), which is the default setting
+- or you can optionally use a Postgres as your database, if you already have an instance running anyway.
+
+A deployment with the embedded [Hiqlite](https://github.com/sebadob/hiqlite), filled caches / buffers and a small set of
+clients and users configured typically settles around 61MB of memory. Using Postgres, it will end up at ~43MB, but then
+you of course have your Postgres consuming additional resources. If a password from a login is hashed, the memory
+consumption will of course go up way higher than this, depending on your configured Argon2ID parameters.
+
+For achieving the speed and efficiency, some additional design tradeoffs were made. For instance, some things can only
+be statically set via config file and not dynamically via UI, because most of them are configured once and then never
touched again.

### Highly Available

-Even though it makes extensive use of caching, you can run it in HA mode. It creates its own embedded HA cache
-using [Hiqlite](https://github.com/sebadob/hiqlite). You can choose between a *SQLite* for single instance deployments
-and a *Postgres*, if you need HA.
+Even though it makes extensive use of caching, you can run it in HA mode. [Hiqlite](https://github.com/sebadob/hiqlite)
+creates its own embedded HA cache and persistence layer. Such a deployment is possible with
+both [Hiqlite](https://github.com/sebadob/hiqlite) and Postgres.

### Admin UI + User Account Dashboard

-Unlike many other options, `rauthy` does have an Admin UI which can be used to basically do almost any operation you
-might need to administrate the whole application and its users. There is also an account dashboard for each individual
-user, where users will get a basic overview over their account and can self-manage som values, password, passkeys, and
-so on.
+Rauthy does have an Admin UI which can be used to do almost any operation you might need to administrate the
+whole application and its users. There is also an account dashboard for each individual user, where users will get a
+basic overview of their account and can self-manage some values, password, passkeys, and so on.

Some Screenshots and further introduction will follow in the future.

### Client Branding

-You have a simple way to create some kind of branding or stylized look for the Login page for each client.
-The whole color theme can be changed and each client can have its own custom logo.
-Additionally, if you modify the branding for the default `rauthy` client, it will not only change the look for the Login
-page, but also for the Account and Admin page.
+You have a simple way to create a branding or stylized look for the Login page for each client. The whole color theme
+can be changed, and each client can have its own custom logo. Additionally, if you modify the branding for the default
+`rauthy` client, it will not only change the look for the Login page, but also for the Account and Admin page.

### Events and Auditing

-Rauthy comes with an Event and Alerting-System. Events are generated in all kinds of scenarios. They can be sent via
-E-Mail, Matrix or Slack, depending on the severity and the configured level. You will see them in the Admin UI in real
-time, or you can subscribe to the events stream and externally handle them depending on your own business logic.
+Rauthy comes with an Event and Alerting System. Events are generated in all kinds of scenarios. They can be sent via
+E-Mail, Matrix or Slack, depending on the severity and the configured level. You will see them in the Admin UI in
+real-time, or you can subscribe to the events stream and externally handle them depending on your own business logic.

### Brute-Force and basic DoS protection

-Rauthy has brute force and basic DoS protection for the login endpoint. Your timeout will be artificially delayed after
-enough invalid logins. It does auto-balacklist IP's that exceeded too many invalid logins, with automatic
-expiry of the blacklisting. You can, if you like, manually blacklist certain IP's as well via the Admin UI.
+Rauthy has brute-force and basic DoS protection for the login endpoint. The timeout will be artificially delayed after
+enough invalid logins. It auto-blacklists IPs that exceed too many invalid logins, with automatic expiry of the
+blacklisting. You can, if you like, manually blacklist certain IPs as well via the Admin UI.

### IoT Ready

@@ -113,16 +114,15 @@ at the exact same time you need to support.
Rauthy is already being used in production, and it works with all typical OIDC clients (so far). It was just not an
open source project for quite some time.
-Keycloak was a rough inspiration in certain places and if something is working with Keycloak, it does with `rauthy` too
-(again, so far).

### Features List

- [x] Fully working OIDC provider
-- [x] SQLite or Postgres as database
+- [x] [Hiqlite](https://github.com/sebadob/hiqlite) or Postgres as database
- [x] Fast and efficient with minimal footprint
-- [x] Highly configurable
- [x] Secure default values
+- [x] Highly configurable
+- [x] High-Availability
- [x] True passwordless accounts with E-Mail + Magic Link + Passkey
- [x] Dedicated Admin UI
- [x] Account dashboard UI for each user with self-service

@@ -150,11 +150,9 @@ Keycloak was a rough inspiration in certain places and if something is working w
- [x] Optional Force MFA for each individual client
- [x] Additional encryption inside the database for the most critical entries
- [x] Automatic database backups with configurable retention and auto-cleanup (SQLite only)
-- [x] auto-encrypted backups (SQLite)
-- [x] Ability to push SQLite backups to S3 storage
-- [x] auto-restore SQLite backups from file and s3
-- [x] High-Availability
-- [x] HA cache layer with its own (optional) mTLS connection
+- [x] auto-encrypted backups ([Hiqlite](https://github.com/sebadob/hiqlite) only)
+- [x] Ability to push [Hiqlite](https://github.com/sebadob/hiqlite) backups to S3 storage
+- [x] auto-restore [Hiqlite](https://github.com/sebadob/hiqlite) backups from file or s3
- [x] Username enumeration prevention
- [x] Login / Password hashing rate limiting
- [x] Session client peer IP binding

@@ -170,7 +168,8 @@ Keycloak was a rough inspiration in certain places and if something is working w
- [x] SwaggerUI documentation
- [x] Configurable E-Mail templates for NewPassword + ResetPassword events
- [x] Prometheus `/metrics` endpoint on separate port
-- [x] No-Setup migrations between different databases (Yes, even between SQLite and Postgres)
+- [x] No-Setup migrations between different databases (Yes, even between [Hiqlite](https://github.com/sebadob/hiqlite)
+  and Postgres)
- [x] Can serve a basic `webid` document
- [x] Experimental FedCM support

@@ -178,11 +177,10 @@ Keycloak was a rough inspiration in certain places and if something is working w

This is a non-exhaustive list of currently open TODO's

+- [ ] UI overhaul to make it "prettier" in certain places
- [ ] Maybe get a nicer Rauthy Logo
- [ ] experimental implementation of [dilithium](https://pq-crystals.org/dilithium/) signing algorithm to become
  quantum safe
-- [ ] maybe something like a `rauthy-migrate` project to make migrating an existing user's DB easier
-- [ ] UI overhaul to make it "prettier" in certain places

## Getting Started

diff --git a/book/src/config/backup.md b/book/src/config/backup.md
index a70f9be32..a2115ada3 100644
--- a/book/src/config/backup.md
+++ b/book/src/config/backup.md
@@ -5,38 +5,38 @@
If you are using Postgres as the main database, Rauthy does not do any backups.
There are a lot of way better tools out there to handle this task.

-## SQLite
+## Hiqlite

-If Rauthy is using a SQLite, it does automatic backups, which can be configured with:
+If Rauthy is using Hiqlite, it does automatic backups, which can be configured with:

```
-# Cron job for automatic data store backups (default: "0 0 4 * * * *")
-# sec min hour day_of_month month day_of_week year
-BACKUP_TASK="0 0 4 * * * *"
-
-# The name for the data store backups. The current timestamp will always be appended automatically.
-# default: rauthy-backup-
-BACKUP_NAME="rauthy-backup-"
-
-# All backups older than the specified hours will be cleaned up automatically (default: 720)
-BACKUP_RETENTION_LOCAL=24
+# When the auto-backup task should run.
+# Accepts cron syntax:
+# "sec min hour day_of_month month day_of_week year"
+# default: "0 30 2 * * * *"
+HQL_BACKUP_CRON="0 30 2 * * * *"
+
+# Local backups older than the configured days will be cleaned up after
+# the backup cron job.
+# default: 30
+#HQL_BACKUP_KEEP_DAYS=30
+
+# Backups older than the configured days will be cleaned up locally
+# after each `Client::backup()` and the cron job `HQL_BACKUP_CRON`.
+# default: 3
+#HQL_BACKUP_KEEP_DAYS_LOCAL=3
```

-All these backups are written inside the pod / container into `/app/data/backup`.
-The database itself will be saved in `/app/data` by default.
+All these backups are written inside the pod / container into `data/state_machine/backups`.

This difference makes it possible, that you could add a second volume mount to the container.
You then have the database itself on a different disk than the backups, which is the most simple and straight forward
-approach to have a basic backup strategy.
-
-```admonish info
-The SQLite backups are done with `VACUUM`, which means you can just use the backups as a normal database again.
-This makes it possible, to just use the [Database Migration](./db_migration.md) feature to apply backups very easily.
-```
+approach to have a basic backup strategy. However, it is recommended to use S3 for backups, especially for HA
+deployments.

## Remote Backups to S3 Storage

-SQLite backups can be pushed to an S3 bucket after creation. This way you can keep only very low amount of local
+Hiqlite backups can be pushed to an S3 bucket after creation. This way, you can keep only a very low amount of local
backups and older ones on cheaper object storage.

Rauthy has been tested against MinIO and Garage S3 storage and is working fine with both, so I expect any standard S3
implementation to work as well,
and Rauthy will take care of the rest.
All backups pushed to S3 will automatically be encrypted.

The configuration is done with the following values:

```
-# The following section will only be taken into account, when
-# SQLite is used as the main database. If you use Postgres, you
-# should use Postgres native tooling like for instance `pgbackrest`
-# to manage your backups.
-# If S3 access is configured, your SQLite backups will be encrypted
-# and pushed into the configured bucket.
-#S3_URL=
-#S3_REGION=
-#S3_PATH_STYLE=false
-#S3_BUCKET=my_s3_bucket_name
-#S3_ACCESS_KEY=
-#S3_ACCESS_SECRET=
-#S3_DANGER_ACCEPT_INVALID_CERTS=false
+# Access values for the S3 bucket where backups will be pushed to.
+#HQL_S3_URL=https://s3.example.com
+#HQL_S3_BUCKET=my_bucket
+#HQL_S3_REGION=example
+#HQL_S3_PATH_STYLE=true
+#HQL_S3_KEY=s3_key
+#HQL_S3_SECRET=s3_secret
```

## Disaster Recovery

@@ -67,51 +61,34 @@ The configuration is done with the following values:
If you really lost all your data, you can easily restore automatically from the latest backup. This works with either a
local `file` backup or with an encrypted remote backup on `s3` storage (as long as you still have the `ENC_KEY_ACTIVE`
that has been used for the remote backup).
-This, again, works only for SQLite. When you are using Postgres, you really should use native tooling which is way
-better at this.
+This, again, works only for Hiqlite. When you are using Postgres, you should use Postgres native tooling like
+`pgBackRest` which is way better at this.

The process is really simple:

-- set an environment variable before the start
-- start up Rauthy
-- check the logs and wait for the backup to be finished
-- after a successful restore, Rauthy will start its normal operation
+1. Have the cluster shut down. This is probably the case anyway, if you need to restore from a backup.
+2. Provide a backup file name on S3 storage with the `HQL_BACKUP_RESTORE` value with prefix `s3:` (encrypted), or a file
+   on disk (plain SQLite file) with the prefix `file:`.
+3. Start up Rauthy
+4. Check the logs and wait for the backup to be finished
+5. After a successful restore, Rauthy will start its normal operation
+6. Make sure to remove the HQL_BACKUP_RESTORE env value.

```admonish danger
After a successful restore, you MUST remove the env var again!
-If you don't do it, Rauthy will re-apply the same backup with the next restart.
+If you don't do it, Rauthy will re-apply the same backup with each following restart over and over again.
```

You only need to set this single value:

```
-# Restores the given backup
-#
-# CAUTION: Be very careful with this option - it will overwrite
-# any existing database! The best way to use this option is to
-# provide it as an environment variable for a single start up
-# and then remove it directly after success.
-#
-# This only works when you are using a SQLite database!
-# If you are running on Postgres, you must use Postgres-native
-# tooling to handle your backups.
-#
-# You can either restore a local backup, or an encrypted one
-# from S3 storage.
-#
-# For restoring from a local backup, provide the folder name
-# of the backup you want to restore. Local SQLite backups are
-# always in `./data/backup/rauthy-backup-TIMESTAMP/` folders.
-# You only provide the backup folder name itself, in this case
-# it would be `rauthy-backup-TIMESTAMP` like this:
-# RESTORE_BACKUP=file:rauthy-backup-TIMESTAMP
-#
-# If you want to restore an encrypted backup from S3 storage,
-# you must provide the object name in the configured bucket.
-# For instance, let's say we have an object named
-# `rauthy-0.20.0-1703243039.cryptr` in our bucket, then the
-# format would be:
-# RESTORE_BACKUP=s3:rauthy-0.20.0-1703243039.cryptr
-#
-#RESTORE_BACKUP=
+# If you ever need to restore from a backup, the process is simple.
+# 1. Have the cluster shut down. This is probably the case anyway, if
+#    you need to restore from a backup.
+# 2. Provide the backup file name on S3 storage with the
+#    HQL_BACKUP_RESTORE value.
+# 3. Start up the cluster again.
+# 4. After the restart, make sure to remove the HQL_BACKUP_RESTORE
+#    env value.
+#HQL_BACKUP_RESTORE=
```
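As a concrete sketch, a one-off restore run with the container image used throughout this book could look like the following. The backup object name here is purely hypothetical; use the name of an actual backup from your bucket:

```
docker run --rm \
    -e HQL_BACKUP_RESTORE=s3:my-hiqlite-backup \
    -v $(pwd)/rauthy/data:/app/data \
    ghcr.io/sebadob/rauthy:0.26.2-lite
```

After the restore has finished successfully, start the container again without the `HQL_BACKUP_RESTORE` variable, as described in the steps above.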
diff --git a/book/src/config/config.md b/book/src/config/config.md
index 992b2e2de..f086c72b8 100644
--- a/book/src/config/config.md
+++ b/book/src/config/config.md
@@ -2,10 +2,34 @@

This shows a full example config with (hopefully) every value nicely described.

+You can configure a lot here, but the most important variables you most likely want to change when going into production
+are the following. Lines beginning with `!!!` are absolutely critical. The order matches their location in the
+reference config below.
+
+- `OPEN_USER_REG`, `USER_REG_DOMAIN_RESTRICTION`
+- `PEER_IP_HEADER_NAME` - when behind a CDN
+- `HQL_BACKUP_CRON`, `HQL_BACKUP_KEEP_DAYS`, `HQL_S3_URL`, `HQL_S3_BUCKET`, `HQL_S3_REGION`, `HQL_S3_PATH_STYLE`,
+  `HQL_S3_KEY`, `HQL_S3_SECRET` - for Hiqlite backups, does not matter when using Postgres
+- `BOOTSTRAP_ADMIN_EMAIL`
+- `HQL_NODE_ID_FROM` or `HQL_NODE_ID` + `HQL_NODES` - HA only
+- !!! `HQL_SECRET_RAFT` + `HQL_SECRET_API` - set even when not using HA
+- `DATABASE_URL` + `HIQLITE` - if you want to use Postgres
+- `RAUTHY_ADMIN_EMAIL`
+- `EMAIL_SUB_PREFIX`, `SMTP_URL`, `SMTP_USERNAME`, `SMTP_PASSWORD`, `SMTP_FROM`
+- !!! `ENC_KEY_ACTIVE` + `ENC_KEYS`
+- `MAX_HASH_THREADS`
+- any target in the `EVENTS / AUDIT` section
+- !!! `PUB_URL`
+- `PROXY_MODE` + `TRUSTED_PROXIES`
+- `TLS_CERT` + `TLS_KEY` - if you don't terminate TLS on your reverse proxy
+- `HQL_TLS_RAFT_KEY` + `HQL_TLS_RAFT_CERT` + `HQL_TLS_API_KEY` + `HQL_TLS_API_CERT` - if you want to internally encrypt
+  database / cache traffic
+- !!! `RP_ID` + `RP_ORIGIN` + `RP_NAME`
+
```admonish caution
When you go into production, make sure that you provide the included secrets / sensitive information in this
-file in an appropriate way. With docker, you can leave them inside this file, but when deploying with Kubernetes,
-extract these values, create Kubernetes Secrets and provide them as environment variables.
+file in an appropriate way. With docker, you can leave them inside this file (with proper access rights!), but when
+deploying with Kubernetes, extract these values into Kubernetes Secrets.
```

```
@@ -14,8 +38,9 @@ extract these values, create Kubernetes Secrets and provide them as environment
#####################################

# If the User Registration endpoint should be accessible by anyone.
-# If not, an admin must create each new user. (default: false)
-#OPEN_USER_REG=true
+# If not, an admin must create each new user.
+# default: false
+#OPEN_USER_REG=false

# If set to true, the `/userinfo` endpoint will do additional validations.
# The non-strict mode will fetch the user by id from the `sub` claim and make

@@ -51,6 +76,23 @@ extract these values, create Kubernetes Secrets and provide them as environment
#evil.net
#"

+# If set to `true`, any validation of the `redirect_uri` provided during
+# a user registration will be disabled.
+# Clients can use this feature to redirect the user back to their application
+# after a successful registration, so instead of ending up in the user
+# dashboard, they come back to the client app that initiated the registration.
+#
+# The given `redirect_uri` will be compared against all registered
+# `client_uri`s and will throw an error if there is no match. However,
+# this check will prevent ephemeral clients from using this feature. Only
+# if you need it in combination with ephemeral clients should you
+# set this option to `true`. Otherwise, it is advised to set the correct
+# Client URI in the Admin UI. The `redirect_uri` will be allowed if it starts
+# with any registered `client_uri`.
+#
+# default: false
+#USER_REG_OPEN_REDIRECT=true

# If set to true, a violation inside the CSRF protection middleware based
# on Sec-* headers will block invalid requests. Usually you always want this
# enabled. You may only set it to false during the first testing phase if you

@@ -77,7 +119,7 @@
# disabled, this feature will not work. You can validate the IPs for each session
# in the Admin UI. If these are correct, your setup is okay.
#
-# (default: true)
+# default: true
#SESSION_VALIDATE_IP=true

# By default, Rauthy will log a warning into the logs, if an active password

@@ -99,7 +141,7 @@
# Cloudflare, which adds custom headers in this case.
# For instance, if your requests are proxied through Cloudflare, you would
# set `CF-Connecting-IP`.
-PEER_IP_HEADER_NAME="CF-Connecting-IP"
+#PEER_IP_HEADER_NAME="CF-Connecting-IP"

# You can enable authn/authz headers which would be added to the response
# of the `/auth/v1/oidc/forward_auth` endpoint. With `AUTH_HEADERS_ENABLE=true`,

@@ -109,27 +151,27 @@
# However, be careful when using this, since this kind of authn/authz has
# a lot of pitfalls out of the scope of Rauthy.
# default: false
-AUTH_HEADERS_ENABLE=true
+#AUTH_HEADERS_ENABLE=true

# Configure the header names being used for the different values.
# You can change them to your needs, if you cannot easily change your
# downstream apps.
# default: x-forwarded-user
-AUTH_HEADER_USER=x-forwarded-user
+#AUTH_HEADER_USER=x-forwarded-user
# default: x-forwarded-user-roles
-AUTH_HEADER_ROLES=x-forwarded-user-roles
+#AUTH_HEADER_ROLES=x-forwarded-user-roles
# default: x-forwarded-user-groups
-AUTH_HEADER_GROUPS=x-forwarded-user-groups
+#AUTH_HEADER_GROUPS=x-forwarded-user-groups
# default: x-forwarded-user-email
-AUTH_HEADER_EMAIL=x-forwarded-user-email
+#AUTH_HEADER_EMAIL=x-forwarded-user-email
# default: x-forwarded-user-email-verified
-AUTH_HEADER_EMAIL_VERIFIED=x-forwarded-user-email-verified
+#AUTH_HEADER_EMAIL_VERIFIED=x-forwarded-user-email-verified
# default: x-forwarded-user-family-name
-AUTH_HEADER_FAMILY_NAME=x-forwarded-user-family-name
+#AUTH_HEADER_FAMILY_NAME=x-forwarded-user-family-name
# default: x-forwarded-user-given-name
-AUTH_HEADER_GIVEN_NAME=x-forwarded-user-given-name
+#AUTH_HEADER_GIVEN_NAME=x-forwarded-user-given-name
# default: x-forwarded-user-mfa
-AUTH_HEADER_MFA=x-forwarded-user-mfa
+#AUTH_HEADER_MFA=x-forwarded-user-mfa

# You can set different security levels for Rauthy's cookies.
# The safest option would be 'host', but may not be desirable when

@@ -168,61 +210,39 @@
#####################################
############# BACKUPS ###############
#####################################

-# Cron job for automatic data store backups (default: "0 0 4 * * * *")
-# sec min hour day_of_month month day_of_week year
-#BACKUP_TASK="0 0 4 * * * *"
-
-# The name for the data store backups. The current timestamp
-# will always be appended automatically. (default: rauthy-backup-)
-#BACKUP_NAME="rauthy-backup-"
-
-# All backups older than the specified hours will be cleaned up
-# automatically (default: 720)
-#BACKUP_RETENTION_LOCAL=720
-
-# The following section will only be taken into account, when
-# SQLite is used as the main database. If you use Postgres, you
-# should use Postgres native tooling like for instance `pgbackrest`
-# to manage your backups.
-# If S3 access is configured, your SQLite backups will be encrypted
-# and pushed into the configured bucket.
-#S3_URL=
-#S3_REGION=
-#S3_PATH_STYLE=false
-#S3_BUCKET=my_s3_bucket_name
-#S3_ACCESS_KEY=
-#S3_ACCESS_SECRET=
-#S3_DANGER_ALLOW_INSECURE=false
-
-# Restores the given backup
-#
-# CAUTION: Be very careful with this option - it will overwrite
-# any existing database! The best way to use this option is to
-# provide it as an environment variable for a single start up
-# and then remove it directly after success.
-#
-# This only works when you are using a SQLite database!
-# If you are running on Postgres, you must use Postgres-native
-# tooling to handle your backups.
-#
-# You can either restore a local backup, or an encrypted one
-# from S3 storage.
-#
-# For restoring from a local backup, provide the folder name
-# of the backup you want to restore. Local SQLite backups are
-# always in `./data/backup/rauthy-backup-TIMESTAMP/` folders.
-# You only provide the backup folder name itself, in this case
-# it would be `rauthy-backup-TIMESTAMP` like this:
-# RESTORE_BACKUP=file:rauthy-backup-TIMESTAMP
-#
-# If you want to restore an encrypted backup from S3 storage,
-# you must provide the object name in the configured bucket.
-# For instance, let's say we have an object named
-# `rauthy-0.20.0-1703243039.cryptr` in our bucket, then the
-# format would be:
-# RESTORE_BACKUP=s3:rauthy-0.20.0-1703243039.cryptr
-#
-#RESTORE_BACKUP=
+# When the auto-backup task should run.
+# Accepts cron syntax:
+# "sec min hour day_of_month month day_of_week year"
+# default: "0 30 2 * * * *"
+#HQL_BACKUP_CRON="0 30 2 * * * *"
+
+# Local backups older than the configured days will be cleaned up after
+# the backup cron job.
+# default: 30
+#HQL_BACKUP_KEEP_DAYS=30
+
+# Backups older than the configured days will be cleaned up locally
+# after each `Client::backup()` and the cron job `HQL_BACKUP_CRON`.
+# default: 3
+#HQL_BACKUP_KEEP_DAYS_LOCAL=3
+
+# If you ever need to restore from a backup, the process is simple.
+# 1. Have the cluster shut down. This is probably the case anyway, if
+#    you need to restore from a backup.
+# 2. Provide the backup file name on S3 storage with the
+#    HQL_BACKUP_RESTORE value.
+# 3. Start up the cluster again.
+# 4. After the restart, make sure to remove the HQL_BACKUP_RESTORE
+#    env value.
+#HQL_BACKUP_RESTORE=
+
+# Access values for the S3 bucket where backups will be pushed to.
+HQL_S3_URL=https://s3.example.com
+HQL_S3_BUCKET=my_bucket
+HQL_S3_REGION=example
+HQL_S3_PATH_STYLE=true
+HQL_S3_KEY=s3_key
+HQL_S3_SECRET=s3_secret

#####################################
############ BOOTSTRAP ##############
#####################################

@@ -230,7 +250,7 @@
# If set, the email of the default admin will be changed
# during the initialization of an empty production database.
-#BOOTSTRAP_ADMIN_EMAIL=admin@localhost.de
+BOOTSTRAP_ADMIN_EMAIL=admin@localhost.de

# If set, this plain text password will be used for the
# initial admin password instead of generating a random

@@ -298,7 +318,7 @@
#BOOTSTRAP_API_KEY_SECRET=twUA2M7RZ8H3FyJHbti2AcMADPDCxDqUKbvi8FDnm3nYidwQx57Wfv6iaVTQynMh

#####################################
-############## CACHE ################
+############# CLUSTER ###############
#####################################

# Can be set to 'k8s' to try to split off the node id from the hostname

@@ -319,17 +339,20 @@ HQL_NODE_ID=1
# id addr_raft addr_api
# id addr_raft addr_api
#
-# 2 nodes must be separated by 2 `\n`
HQL_NODES="
1 localhost:8100 localhost:8200
"

-# If set to `true`, all SQL statements will be logged for debugging
-# purposes.
-# default: false
-HQL_LOG_STATEMENTS=true
+# Sets the limit when the Raft will trigger the creation of a new
+# state machine snapshot and purge all logs that are included in
+# the snapshot.
+# Higher values can achieve more throughput in very write-heavy
+# situations, but they will result in more disk usage and longer
+# snapshot creations / log purges.
+# default: 10000
+#HQL_LOGS_UNTIL_SNAPSHOT=10000

-# Secrets for Raft internal authentication as well as for the Hiqlite API.
+# Secrets for Raft internal authentication as well as for the API.
# These must be at least 16 characters long and you should provide
# different ones for both variables.
HQL_SECRET_RAFT=SuperSecureSecret1337

@@ -345,40 +368,97 @@ HQL_SECRET_API=SuperSecureSecret1337
############ DATABASE ###############
#####################################

-# The database driver will be chosen at runtime depending on
-# the given DATABASE_URL format. Examples:
-# Sqlite: 'sqlite:data/rauthy.db' or 'sqlite::memory:'
-# Postgres: 'postgresql://User:PasswordWithoutSpecialCharacters@localhost:5432/DatabaseName'
+# Connection string to connect to a Postgres database.
+# This will be ignored as long as `HIQLITE=true`.
#
-# NOTE: The password in this case should be alphanumeric. Special
-# characters could cause problems in the connection string.
+# Format: 'postgresql://User:PasswordWithoutSpecialCharacters@localhost:5432/DatabaseName'
#
-# CAUTION:
-# To make the automatic migrations work with Postgres15, when
-# you do not want to just use the `postgres` user, You need
-# to have a user with the same name as the DB / schema. For
-# instance, the following would work without granting extra
-# access to the `public` schema which is disabled by default
-# since PG15:
+# NOTE: The password in this case should be alphanumeric.
+# Special characters could cause problems in the connection string.
#
+# CAUTION: To make the automatic migrations work with Postgres 15+,
+# when you do not want to just use the `postgres` user, you need
+# to have a user with the same name as the DB / schema. For instance,
+# the following would work without granting extra access to the
+# `public` schema which is disabled by default since PG15:
# database: rauthy
# user: rauthy
# schema: rauthy with owner rauthy
#
-#DATABASE_URL=sqlite::memory:
-#DATABASE_URL=sqlite:data/rauthy.db
#DATABASE_URL=postgresql://rauthy:123SuperSafe@localhost:5432/rauthy

-# Max DB connections - irrelevant for SQLite (default: 5)
-#DATABASE_MAX_CONN=5
+# Max DB connections for the Postgres pool.
+# Irrelevant for Hiqlite.
+# default: 20
+#DATABASE_MAX_CONN=20

-# If specified, the current Database, set with DATABASE_URL,
-# will be DELETED and OVERWRITTEN with a migration from the
-# given database with this variable. Can be used to migrate
-# between different databases.
-#
+# If specified, the currently configured Database will be
+# DELETED and OVERWRITTEN with a migration from the given
+# database with this variable. Can be used to migrate between
+# different databases.
+# To migrate from Hiqlite, use the `sqlite:` prefix.
+#
# !!! USE WITH CARE !!!
-#MIGRATE_DB_FROM=sqlite:data/rauthy.db
+#
+#MIGRATE_DB_FROM=sqlite:data/state_machine/db/hiqlite.db
+#MIGRATE_DB_FROM=postgresql://postgres:123SuperSafe@localhost:5432/rauthy
+
+# Hiqlite is the default database for Rauthy.
+# You can opt out and use Postgres instead by setting the proper
+# `DATABASE_URL=postgresql://...` and setting `HIQLITE=false`.
+# default: true
+#HIQLITE=true
+
+# The data dir hiqlite will store raft logs and state machine data in.
+# default: data
+#HQL_DATA_DIR=data
+
+# The file name of the SQLite database in the state machine folder.
+# default: hiqlite.db
+#HQL_FILENAME_DB=hiqlite.db
+
+# If set to `true`, all SQL statements will be logged for debugging
+# purposes.
+# default: false
+#HQL_LOG_STATEMENTS=false
+
+# The size of the pooled connections for local database reads.
+#
+# Do not confuse this with a pool size for network databases, as it
+# is much more efficient. You can't really translate between them,
+# because it depends on many things, but assuming a factor of 10 is
+# a good start. This means, if you needed a (read) pool size of 40
+# connections for something like a Postgres before, you should start
+# at a `read_pool_size` of 4.
+#
+# Keep in mind that this pool is only used for reads, while writes
+# travel through the Raft and have their own dedicated connection.
+#
+# default: 4
+#HQL_READ_POOL_SIZE=4
+
+# Enables immediate flush + sync to disk after each Log Store Batch.
+# The situations where you would need this are very rare, and you
+# should use it with care.
+#
+# The default is `false`, and a flush + sync will be done in 200ms
+# intervals. Even if the application should crash, the OS will take
+# care of flushing left-over buffers to disk, and no data will get
+# lost. If something worse happens, you might lose the last
+# 200ms of commits.
+#
+# The only situation where you might want to enable this option is
+# when you are on a host that might lose power out of nowhere, and
+# it has no backup battery, or when your OS / disk itself is unstable.
+#
+# `sync_immediate` will greatly reduce the write throughput and put
+# a lot more pressure on the disk. If you have lots of writes, it
+# can pretty quickly kill your SSD, for instance.
+#HQL_SYNC_IMMEDIATE=false
+
+# The password for the Hiqlite dashboard as Argon2ID hash.
+# '123SuperMegaSafe' in this example
+#HQL_PASSWORD_DASHBOARD=JGFyZ29uMmlkJHY9MTkkbT0zMix0PTIscD0xJE9FbFZURnAwU0V0bFJ6ZFBlSEZDT0EkTklCN0txTy8vanB4WFE5bUdCaVM2SlhraEpwaWVYOFRUNW5qdG9wcXkzQQ==

# Defines the time in seconds after which the `/health` endpoint
# includes HA quorum checks. The initial delay solves problems

@@ -416,12 +496,12 @@
# Grant flow. You may increase the default of 300 seconds, if you have
# "slow users" and they are simply not fast enough with the verification.
# default: 300
-DEVICE_GRANT_CODE_LIFETIME=300
+#DEVICE_GRANT_CODE_LIFETIME=300

# The length of the `user_code` the user has to enter manually for
# auth request validation. This must be < 64 characters.
# default: 8
-DEVICE_GRANT_USER_CODE_LENGTH=8
+#DEVICE_GRANT_USER_CODE_LENGTH=8

# Specifies the rate-limit in seconds per IP for starting new Device
# Authorization Grant flows. This is especially important for public

@@ -431,19 +511,19 @@
# If you use the `device_code` grant with confidential clients only,
# you can leave this unset, which will not rate-limit the endpoint.
# default: not set
-DEVICE_GRANT_RATE_LIMIT=1
+#DEVICE_GRANT_RATE_LIMIT=1

# The interval in seconds which devices are told to use when they
# poll the token endpoint during Device Authorization Grant flow.
# default: 5
-DEVICE_GRANT_POLL_INTERVAL=5
+#DEVICE_GRANT_POLL_INTERVAL=5

# You can define a global lifetime in hours for refresh tokens issued
# from a Device Authorization Grant flow. You might want to have a
# higher lifetime than normal refresh tokens, because they might be
# used in IoT devices which may be offline for longer periods of time.
# default: 72
-DEVICE_GRANT_REFRESH_TOKEN_LIFETIME=72
+#DEVICE_GRANT_REFRESH_TOKEN_LIFETIME=72

#####################################
############## DPOP #################
#####################################

# May be set to 'false' to disable forcing the usage of
# DPoP nonce's.
# default: true
-DPOP_FORCE_NONCE=true
+#DPOP_FORCE_NONCE=true

# Lifetime in seconds for DPoP nonces. These are used to
# limit the lifetime of a client's DPoP proof. Do not set
# lower than 30 seconds to avoid too many failed client
# token requests.
# default: 900
-DPOP_NONCE_EXP=900
+#DPOP_NONCE_EXP=900

#####################################
########## DYNAMIC CLIENTS ##########

@@ -628,7 +708,7 @@ ENC_KEY_ACTIVE=bVCyTsGaggVy5yqQ
# https://sebadob.github.io/rauthy/config/argon2.html
# M_COST should never be below 32768 in production
ARGON2_M_COST=131072
-# T_COST should never be below 1 in production
+# T_COST must be greater than 0
ARGON2_T_COST=4
# P_COST should never be below 2 in production
ARGON2_P_COST=8

@@ -919,7 +999,6 @@ EVENT_LEVEL_FAILED_LOGIN=info
#LOG_LEVEL=info

# The log level for the `Hiqlite` persistence layer.
-# At the time of writing, only the cache will use `hiqlite`
# default: info
LOG_LEVEL_DATABASE=info

@@ -1113,7 +1192,7 @@ PROXY_MODE=false
############### TLS #################
#####################################

-## Rauthy TLS
+## UI + API TLS

# Overwrite the path to the TLS certificate file in PEM
# format for rauthy (default: tls/tls.crt)

@@ -1124,7 +1203,7 @@ PROXY_MODE=false
# (default: tls/tls.key)
#TLS_KEY=tls/tls.key

-## CACHE TLS
+## Database / Cache internal TLS

# If given, these keys / certificates will be used to establish
# TLS connections between nodes.
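To make the two TLS scopes above concrete, a combined snippet could look like the following. The file paths are made up for illustration; the variable names are the ones from the reference config:

```
## UI + API TLS
TLS_CERT=tls/tls.crt
TLS_KEY=tls/tls.key

## Database / Cache internal TLS (Raft + API traffic between Hiqlite nodes)
HQL_TLS_RAFT_KEY=tls/hiqlite/raft.key.pem
HQL_TLS_RAFT_CERT=tls/hiqlite/raft.cert.pem
HQL_TLS_API_KEY=tls/hiqlite/api.key.pem
HQL_TLS_API_CERT=tls/hiqlite/api.cert.pem
```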
diff --git a/book/src/config/db_migration.md b/book/src/config/db_migration.md
index 37f3bc842..afaae3785 100644
--- a/book/src/config/db_migration.md
+++ b/book/src/config/db_migration.md
@@ -1,9 +1,10 @@
# Database Migrations

-You can migrate easily between SQLite and Postgres, or just between different instances of them.
+You can migrate easily between Hiqlite and Postgres, or just between different instances of the same database.

-Let's say you started out by evaluating Rauthy with a SQLite and a single instance deployment. Later on, you want to
-migrate to a HA setup, which requires you to use a Postgres.
+Let's say you started out by evaluating Rauthy with the default Hiqlite and a single instance deployment. Later on, you
+want to migrate to Postgres for whatever reason. Or you started with Postgres and you want to reduce your memory
+footprint by switching to Hiqlite. All of this is easily possible.

**Solution:** `MIGRATE_DB_FROM`

@@ -12,10 +13,10 @@ The way it works is the following:

1. At startup, have a look if `MIGRATE_DB_FROM` is configured
2. If yes, then connect to the given database
-3. At the same time, connect to the database specified in the `DATABASE_URL`
-4. Overwrite all existing data in `DATABASE_URL` with the data from the `MIGRATE_DB_FROM` database
+3. At the same time, connect to the database specified via `HIQLITE` and `DATABASE_URL`
+4. Overwrite all existing data in the target database with the data from the `MIGRATE_DB_FROM` source
5. Close the connection to `MIGRATE_DB_FROM`
-6. Use the `DATABASE_URL` as the new database and start normal operation
+6. Start normal operation

```admonish danger
`MIGRATE_DB_FROM` overwrites any data in the target database! Be very careful with this option.
If you don't remove the variable again, the migration would be applied again with
the next restart of the application. Remove the config variable immediately after a successful migration.
```

```admonish info
-**v0.14 and beyond:** if you want to migrate to a different database, for instance from SQLite to Postgres, you need to
-switch to the correct rauthy image as well. Rauthy v0.14 and beyond has different container images for the databases.
+**any version below 0.27.0:** if you want to migrate to a different database, for instance from SQLite to Postgres, you
+need to switch to the correct Rauthy image as well.
```
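As an example, a Hiqlite-to-Postgres move could be driven by a config like the following for a single start-up. All values are taken from the reference config; the connection string is a placeholder:

```
# the target database
HIQLITE=false
DATABASE_URL=postgresql://rauthy:123SuperSafe@localhost:5432/rauthy

# the source to migrate away from - remove again right after the migration!
MIGRATE_DB_FROM=sqlite:data/state_machine/db/hiqlite.db
```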
```admonish hint

diff --git a/book/src/config/ha.md b/book/src/config/ha.md
index 39df001c1..0dad28b88 100644
--- a/book/src/config/ha.md
+++ b/book/src/config/ha.md
@@ -2,13 +2,11 @@

Rauthy is capable of running in a High Availability Mode (HA).

-Some values, like authentication codes for instance, do live in the cache only. Additionally, there might come an
-option with a future version which offers a special in-memory only mode in some situations.
-
-Because of this, all instances create and share a single HA cache layer, which means at the same time, that you cannot
-just scale up the replicas infinitely without adjusting the config. The optimal amount of replicas for a HA mode would
-be 3, or if you need even higher resilience 5. More replicas should work just fine, but this has never been really
-tested and the performance will degrade at some point.
+Some values, like authentication codes for instance, do live in the cache only. Because of this, all instances create
+and share a single HA cache layer, which means at the same time, that you cannot just scale up the replicas infinitely
+without adjusting the config. The optimal amount of replicas for a HA mode would be 3, or, if you need even higher
+resilience, 5. More replicas should work just fine, but this has never been really tested, and the latency will
+increase at some point.

The Cache layer uses another project of mine called [Hiqlite](https://github.com/sebadob/hiqlite). It uses the Raft
algorithm under the hood to achieve consistency.

```admonish caution
Even though everything is authenticated, you should not expose the Hiqlite ports
to the public, if not really necessary for some reason. You configure these ports with the `HQL_NODES`
-config value in the `CACHE` section.
+config value in the `CLUSTER` section.
```

## Configuration

Earlier versions of Rauthy have been using [redhac](https://github.com/sebadob/redhac) for the HA cache layer. While
-`redhac` was working fine, it had a few design issues I wanted to get rid of. Since `v0.26.0`, Rauthy uses the above
-mentioned [Hiqlite](https://github.com/sebadob/hiqlite) instead. You only need to configure a few variables:
+`redhac` was working fine, it had a few design issues I wanted to get rid of. Since `v0.26.0`, Rauthy uses the
+above-mentioned [Hiqlite](https://github.com/sebadob/hiqlite) instead. You only need to configure a few variables:

### `HQL_NODE_ID`

The `HQL_NODE_ID` is mandatory, even for a single replica deployment with only a single node in `HQL_NODES`.
If you deploy Rauthy as a StatefulSet inside Kubernetes, you can ignore this value and just set `HQL_NODE_ID_FROM`
-below. If you deploy anywere else or you are not using a StatefulSet, you need to set the `HQL_NODE_ID` to tell Rauthy
+below. If you deploy anywhere else, or you are not using a StatefulSet, you need to set the `HQL_NODE_ID` to tell Rauthy
which node of the Raft cluster it should be.

```
@@ -52,8 +50,8 @@ This will parse the correct NodeID from the Pod hostname, so you don't have to w

### `HQL_NODES`

-Using this value, you defined the Cache / Raft members. This must be given even if you just deploy a single instance.
-The description from the reference config should be clear enough:
+This value defines the Cache / Raft members. It must be given even if you just deploy a single instance. The description
+from the reference config should be clear enough:

```
# All cluster member nodes.

diff --git a/book/src/config/logging.md b/book/src/config/logging.md
index b47092b78..11cd25026 100644
--- a/book/src/config/logging.md
+++ b/book/src/config/logging.md
@@ -22,6 +22,21 @@ for logging information from different function runs or things that have been tr
LOG_LEVEL=info
```

+### `LOG_LEVEL_DATABASE`
+
+The Hiqlite database logging is, at the time of writing, pretty verbose on purpose. The whole persistence layer with the
+Raft cluster setup has been written from the ground up. The amount of logging will be reduced in later versions, when
+the whole layer has been proven to be really solid, but for now you get more information just in case you need to debug
+something.
+
+You can reduce the default logging and, for instance, set it to `warn` or `error` only.
+
+```
+# The log level for the `Hiqlite` persistence layer.
+# default: info
+LOG_LEVEL_DATABASE=info
+```
+
### `LOG_LEVEL_ACCESS`

For changing the logging behavior for access logs to the API endpoints, you will need to set the `LOG_LEVEL_ACCESS`.

@@ -49,6 +64,22 @@ to reduce duplicated log outputs.
LOG_LEVEL_ACCESS=Basic
```

+### `LOG_FMT`
+
+Rauthy can output logs as JSON data with the following variable:
+
+```
+# You can change the log output format to JSON, if you set:
+# `LOG_FMT=json`.
+# Keep in mind, that some logs will include escaped values,
+# for instance when `Text` already logs a JSON in debug level.
+# Some other logs, like an Event for instance, will be formatted
+# as Text anyway. If you need to auto-parse events, please consider
+# using an API token and listen to them actively.
+# default: text
+#LOG_FMT=text
+```
+
## Events

Events are used for auditing and never miss anything. If something important happens, you usually need to inspect logs

@@ -92,13 +123,16 @@ it should post the events.
# you should provide `EVENT_MATRIX_ACCESS_TOKEN`.
# If both are given, the `EVENT_MATRIX_ACCESS_TOKEN` will be preferred.
#
-# If left empty, no messages will be sent to Matrix.
+# If left empty, no messages will be sent to Matrix.
# Format: `@:`
#EVENT_MATRIX_USER_ID=
# Format: `!:`
#EVENT_MATRIX_ROOM_ID=
#EVENT_MATRIX_ACCESS_TOKEN=
#EVENT_MATRIX_USER_PASSWORD=
+# URL of your Matrix server.
+# default: https://matrix.org
+#EVENT_MATRIX_SERVER_URL=https://matrix.org
# Optional path to a PEM Root CA certificate file for the Matrix client.
#EVENT_MATRIX_ROOT_CA_PATH=path/to/my/root_ca_cert.pem
# May be set to disable the TLS validation for the Matrix client.

diff --git a/book/src/config/tls.md b/book/src/config/tls.md
index 97db2fa47..476f87bcb 100644
--- a/book/src/config/tls.md
+++ b/book/src/config/tls.md
@@ -75,10 +75,11 @@ The [reference config](../config/config.html) contains a `TLS` section with all
For this example, we will be using the same certificates for both the internal cache mTLS connections and the public
facing HTTPS server.

-### Cache
+### Hiqlite

-The cache layer (optionally) uses TLS, if you provide certificates. Simply provide the following values from the `TLS`
-section in the reference config:
+Hiqlite can run the whole database layer, and it will always take care of caching. It can be configured to use TLS
+internally, if you provide certificates. Simply provide the following values from the `TLS` section in the reference
+config:

```
# If given, these keys / certificates will be used to establish

diff --git a/book/src/config/user_reg.md b/book/src/config/user_reg.md
index 7628a2e9c..837325699 100644
--- a/book/src/config/user_reg.md
+++ b/book/src/config/user_reg.md
@@ -132,10 +132,27 @@ The following things will happen:

This makes it possible to use Rauthy as your upstream provider without the user really needing to interact with or know
about it in detail, which again leads to less confusion.

-```admonish note
-If you want to complete this improved UX setup, you should set a **Client URI** for the client in the admin dashboard.
-When there is a valid value, a small home icon will be shown inside the login form, so a user can get back to the
-client's URI without possibly screwing up with incorrectly using the browsers back button.
+By default, the allowed `redirect_uri`s are restricted to all existing `client_uri`s in the database. They will be
+compared via `redirect_uri.startsWith(client_uri)`. If you want to opt out of the additional `redirect_uri` checks and
+configure an open redirect to allow just anything, you can do so:
+
+```
+# If set to `true`, any validation of the `redirect_uri` provided during
+# a user registration will be disabled.
+# Clients can use this feature to redirect the user back to their application
+# after a successful registration, so instead of ending up in the user
+# dashboard, they come back to the client app that initiated the registration.
+#
+# The given `redirect_uri` will be compared against all registered
+# `client_uri`s and will throw an error if there is no match. However,
+# this check will prevent ephemeral clients from using this feature. Only
+# if you need it in combination with ephemeral clients should you
+# set this option to `true`. Otherwise, it is advised to set the correct
+# Client URI in the Admin UI. The `redirect_uri` will be allowed if it starts
+# with any registered `client_uri`.
+#
+# default: false
+#USER_REG_OPEN_REDIRECT=true
```
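The check described above boils down to a simple prefix match. A minimal sketch of the idea in Rust, with hypothetical names and not Rauthy's actual internals:

```rust
// Sketch of the `redirect_uri` validation described above (hypothetical,
// not Rauthy's real code): a redirect_uri is accepted if it starts with
// any registered client_uri, unless the check is disabled entirely.
fn redirect_uri_allowed(redirect_uri: &str, client_uris: &[&str], open_redirect: bool) -> bool {
    if open_redirect {
        // USER_REG_OPEN_REDIRECT=true -> no validation at all
        return true;
    }
    client_uris.iter().any(|uri| redirect_uri.starts_with(uri))
}

fn main() {
    let registered = ["https://app.example.com"];
    // allowed: starts with a registered client_uri
    assert!(redirect_uri_allowed("https://app.example.com/after-reg", &registered, false));
    // rejected: no registered client_uri matches
    assert!(!redirect_uri_allowed("https://evil.example.net", &registered, false));
}
```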
### Custom Frontend

diff --git a/book/src/getting_started/docker.md b/book/src/getting_started/docker.md
index 129de8e64..3c134f619 100644
--- a/book/src/getting_started/docker.md
+++ b/book/src/getting_started/docker.md
@@ -3,9 +3,9 @@

## Testing / Local Evaluation

For getting a first look at Rauthy, you can start it with docker (or any other container runtime) on your localhost.
-The image contains a basic default config which is sufficient for local testing. Rauthy has pretty strict cookie
-settings and not all browsers treat `localhost` as being secure, therefore you should allow insecure cookies for
-testing locally:
+The image contains a basic default config which is sufficient for local testing (please don't use it in production).
+Rauthy has pretty strict cookie settings, and not all browsers treat `localhost` as being secure, therefore you should
+allow insecure cookies for testing locally:

```
docker run --rm \
    -e COOKIE_MODE=danger-insecure \
    -p 8080:8080 \
    --name rauthy \
    ghcr.io/sebadob/rauthy:0.26.2-lite
```

-This will start the container in interactive mode with an in-memory SQLite database. Just take a look at the log at the
-logs to see the URL and first password.
+This will start the container in interactive mode with the [Hiqlite](https://github.com/sebadob/hiqlite) database. Just
+take a look at the logs to see the Account Dashboard URL and the new admin password to get access.

-If you want to test a bit more in depth, you can change to an on-disk database easily:
+If you want to test a bit more in depth, you can keep the container between restarts:

```
docker run -d \
    -e COOKIE_MODE=danger-insecure \
-    -e DATABASE_URL=sqlite:data/rauthy.db \
    -p 8080:8080 \
    --name rauthy \
    ghcr.io/sebadob/rauthy:0.26.2-lite
```

To see the logs and the new admin password, take a look with

```
docker logs -f rauthy
```

-To delete the container, if you do not need it anymore, execute
+To delete the container, if you don't need it anymore, execute

```
docker stop rauthy && docker rm rauthy
```

-To proceed, go to **[First Start](first_start.md)**
+To proceed, go to **[First Start](first_start.md)**, or do the production setup below to have persistence.

## Production Setup

For going to production or to test more in-depth, you need to apply a config that matches your environment.

-The first thing you might want to do is to add a volume mount for the database.
-The second thing is to provide a more custom config.
+The first thing you want to do is to add a volume mount for the database. The second thing is to provide a more
+appropriate config.

-Rauthy can either be configured via environment variables only, or you can provide a config file.
-You can add environment variables to the startup command with the `-e` option, like shown in the on-disk SQLite
-command.
-A better approach, when you have a bigger config file, would be to have all of them in a config file.
+Rauthy can either be configured via environment variables only, or you can provide a config file. It parses both: the
+It parses both: the config file is read first, and any env var that is set will overwrite a possibly existing value
+from the config. You can add environment variables to the startup command with the `-e` option.
+
+```admonish caution
+Even if you don't want to use any config file at all, you need to make sure to set safe defaults. Usually you should
+always create at least an empty config just to overwrite the test-config inside the container, because it sets some
+variables that should only be used for testing.
+```

```admonish note
-The following commands will work on Linux and Mac OS (even though not tested on Mac OS). If you are on Windows,
-you might replace the `pwd` command and just paste in the path directly. Since I am no Windows user myself, I
-cannot provide tested commands in this case.
+The following commands will work on Linux and Mac OS (even though not tested on Mac OS). Since I am no Windows user
+myself, I cannot provide tested commands in this case.
```

-**1. We want to create a new directory for rauthy's persistent data**
+**1. We want to create a new directory for Rauthy's persistent data**

```
-mkdir rauthy
+mkdir -p rauthy/data
```

-**2. Add the new config file.**
+**2. Add the new config file.**
+
This documentation is in an early version and remote links are not available yet, they will be added at a later
point. For now, create a new file and paste the [reference config](../config/config.html)
@@ -81,20 +85,16 @@ point. For now, create a new file and paste the [reference config](../config/con
```
vim rauthy/rauthy.cfg
```

-**3. Create a sub-directory for the Database files**
+**3. Access rights for the Database files**

-```
-mkdir rauthy/data
-```
-
-The rauthy container by default runs everything with user:group 10001:10001 for security reasons.
+The Rauthy container by default runs everything with user:group `10001:10001` for security reasons.
To make this work with the default values, you have 2 options:

- Change the access rights:

```
-chmod 0640 rauthy/rauthy.cfg
-chmod 0700 -R rauthy/data
+sudo chmod 0600 rauthy/rauthy.cfg
+sudo chmod 0700 rauthy/data
sudo chown -R 10001:10001 rauthy
```
@@ -104,17 +104,25 @@
chmod a+w rauthy/data
```

-This will make the directory writeable for everyone, so rauthy can create the database files inside the container
-with 10001:10001 again.
+This will make the directory writeable for everyone, so Rauthy can create the database files inside the container
+with `10001:10001` again.

-```admonish note
+```admonish caution
The safest approach would be to change the owner and group for these files on the host system. This needs `sudo` to
edit the config, which may be a bit annoying, but at the same time it makes sure, that you can only read the secrets
inside it with `sudo` too.
+
+You should avoid making Rauthy's data world-accessible at all cost. [Hiqlite](https://github.com/sebadob/hiqlite) will
+take care of this automatically during sub-directory creation, but the config includes sensitive information.
```

-**4. Adopt the config to your liking.**
-Make sure to adjust the volume mount for the sqlite directory in step 5, if it differs from `sqlite:data/rauthy.db`
+**4. Adapt the config to your liking.**
+
+Take a look at the [reference config](../config/config.html) and adapt everything to your needs, but to not break this
+example, be sure not to change `HQL_DATA_DIR`.
+
+For an in-depth guide on a production-ready config, check the [Production Config](../config/production_config.md)
+section.

**5. 
Start the container with volume mounts** @@ -127,13 +135,9 @@ docker run -d \ ghcr.io/sebadob/rauthy:0.26.2-lite ``` -**6. Restrict DB files access even more** -After rauthy has done the first start, you could harden the access rights of the SQLite files even more. -This would make sure, that no one without `sudo` could just copy and read in the SQLite in some other place. -Just execute once more: - -``` -sudo chmod 0700 -R rauthy/data -``` +- `-v $(pwd)/rauthy/rauthy.cfg:/app/rauthy.cfg` makes sure to overwrite the testing config inside the container +- `-v $(pwd)/rauthy/data:/app/data` mounts the volume for Hiqlite +- `-p 8080:8080` needs to match your configured `LISTEN_PORT_HTTP` or `LISTEN_PORT_HTTPS` of course. If you use a + reverse proxy inside a docker network, you don't need to expose any port. -**7. You can now proceed with the [First Start](first_start.md) steps.** +**6. You can now proceed with the [First Start](first_start.md) steps.** diff --git a/book/src/getting_started/k8s.md b/book/src/getting_started/k8s.md index 7b863d5c8..42327d3d9 100644 --- a/book/src/getting_started/k8s.md +++ b/book/src/getting_started/k8s.md @@ -52,8 +52,6 @@ Make sure to watch the indentation. ```admonish caution Do not include sensitive information like for instance the ENC_KEYS inside the normal Config. Use the secrets from the next step for this. -If you use SQLite, you can include the DATABASE_URL in the config, since it does not contain a password, but -never do this for Postgres! ``` ```admonish note @@ -78,19 +76,8 @@ metadata: namespace: rauthy type: Opaque stringData: - # Secret token, which is used to authenticate the cache members. - # Only necessary when `HA_MODE=true` - #CACHE_AUTH_TOKEN= - - # The database driver will be chosen at runtime depending on - # the given DATABASE_URL format. Examples: - # Sqlite: 'sqlite:data/rauthy.db' or 'sqlite::memory:' - # Postgres: 'postgresql://User:PasswordWithoutSpecialCharacters@localhost:5432/DatabaseName' - # - # NOTE: The password in this case should be alphanumeric. - # Special characters could cause problems in the connection - # string. - DATABASE_URL: + HQL_S3_KEY: + HQL_S3_SECRET: # Secrets for Raft internal authentication as well as for the Hiqlite API. # These must be at least 16 characters long and you should provide @@ -98,6 +85,32 @@ stringData: HQL_SECRET_RAFT: HQL_SECRET_API: + # The password for the Hiqlite dashboard as Argon2ID hash. + # '123SuperMegaSafe' in this example + # + # You should only provide it, if you really need to access the DB + # directly for some reasons. + #HQL_PASSWORD_DASHBOARD=JGFyZ29uMmlkJHY9MTkkbT0zMix0PTIscD0xJE9FbFZURnAwU0V0bFJ6ZFBlSEZDT0EkTklCN0txTy8vanB4WFE5bUdCaVM2SlhraEpwaWVYOFRUNW5qdG9wcXkzQQ== + + # Connection string to connect to a Postgres database. + # This will be ignored as long as `HIQLITE=true`. + # + # Format: 'postgresql://User:PasswordWithoutSpecialCharacters@localhost:5432/DatabaseName' + # + # NOTE: The password in this case should be alphanumeric. + # Special characters could cause problems in the connection string. + # + # CAUTION: To make the automatic migrations work with Postgres 15+, + # when you do not want to just use the `postgres` user, You need + # to have a user with the same name as the DB / schema. 
For instance, + # the following would work without granting extra access to the + # `public` schema which is disabled by default since PG15: + # database: rauthy + # user: rauthy + # schema: rauthy with owner rauthy + # + #DATABASE_URL: postgresql://rauthy:123SuperSafe@localhost:5432/rauthy + # You need to define at least one valid encryption key. # These keys are used in various places, like for instance # encrypting confidential client secrets in the database, or @@ -108,6 +121,7 @@ stringData: # q6u26vXV/M0NFQzhSSldCY01rckJNa1JYZ3g2NUFtSnNOVGdoU0E= # bVCyaggQ/UzluN29DZW41M3hTSkx6Y3NtZmRuQkR2TnJxUTYzcjQ= ENC_KEYS: |- + # This identifies the key ID from the `ENC_KEYS` list, that # should actively be used for new encryption's. ENC_KEY_ACTIVE: @@ -141,9 +155,6 @@ stringData: All variables specified here should be out-commented in the `rauthy-config` from above. Make sure that things like `CACHE_AUTH_TOKEN` and `ENC_KEYS` are generated in a secure random way. -The `DATABASE_URL` with SQLite, like used in this example, does not contain sensitive information, but we will -create it as a secret anyway to have an easier optional migration to postgres later on. - Generate a new encryption key with ID in the correct format. ``` @@ -164,7 +175,7 @@ The `ENC_KEY_ID` would be d4d1a581 ``` -You can generate safe values for both Β΄HQL_SECRET_RAFT` and `HQL_SECRET_API` in many ways. You can just provide a random +You can generate safe values for both `HQL_SECRET_RAFT` and `HQL_SECRET_API` in many ways. You can just provide a random alphanumeric value, which for instance: ``` @@ -177,6 +188,9 @@ or you can use the above `openssl` command again, even though Hiqlite does not n openssl rand -base64 48 ``` +If you plan on using S3 for backups, paste the proper values into `HQL_S3_KEY` and `HQL_S3_SECRET`, otherwise +out-comment them. + ### Create and apply the stateful set ``` @@ -195,11 +209,15 @@ spec: selector: app: rauthy ports: - # If you use the HA feature later on, the port over which the cache layer does - # communicate. - - name: cache - port: 8000 - targetPort: 8000 + # Ports 8100 and 8200 (by default) are used for the Hiqlite internal communication. + - name: hiqlite-raft + protocol: TCP + port: 8100 + targetPort: 8100 + - name: hiqlite-api + protocol: TCP + port: 8200 + targetPort: 8200 # Assuming that this example file will run behind a Kubernetes ingress and does # use HTTP internally. - name: http @@ -242,18 +260,14 @@ spec: runAsGroup: 10001 allowPrivilegeEscalation: false ports: - - containerPort: 8000 + # Hiqlite internal ports + - containerPort: 8100 + - containerPort: 8200 # You may need to adjust this, if you decide to start in https only mode # or use another port - containerPort: 8080 - containerPort: 8443 env: - - name: DATABASE_URL - valueFrom: - secretKeyRef: - name: rauthy-secrets - key: DATABASE_URL - # You must set both Hiqlite secrets even for a single node deployment - name: HQL_SECRET_RAFT valueFrom: @@ -266,6 +280,25 @@ spec: name: rauthy-secrets key: HQL_SECRET_API + # If you don't want to use S3 for backups, out-comment these 2. 
+ - name: HQL_S3_KEY + valueFrom: + secretKeyRef: + name: rauthy-secrets + key: HQL_S3_KEY + - name: HQL_S3_SECRET + valueFrom: + secretKeyRef: + name: rauthy-secrets + key: HQL_S3_SECRET + + # Only out-comment if you use Postgres + #- name: DATABASE_URL + # valueFrom: + # secretKeyRef: + # name: rauthy-secrets + # key: DATABASE_URL + # Encryption keys used for encryption in many places - name: ENC_KEYS valueFrom: @@ -350,18 +383,18 @@ spec: # depends on your use case. cpu: 100m limits: - # Be careful with the memory limit. You must make sure, that the - # (very costly) password hashing has enough memory available. If not, - # the application will crash. You do not really need a memory limit, - # since Rust is not a garbage collected language. Better take a close - # look at what the container actually needs during - # prime time and set the requested resources above properly. - #memory: - # A CPU limit may make sense in case of DDoS attacks or something - # like this, if you do not have external rate limiting or other - # mechanisms. Otherwise, `MAX_HASH_THREADS` is the main mechanism - # to limit resources. - cpu: 1000m + # Be careful with the memory limit. You must make sure, that the + # (very costly) password hashing has enough memory available. If not, + # the application will crash. You do not really need a memory limit, + # since Rust is not a garbage collected language. Better take a close + # look at what the container actually needs during + # prime time and set the requested resources above properly. + #memory: + # A CPU limit may make sense in case of DDoS attacks or something + # like this, if you do not have external rate limiting or other + # mechanisms. Otherwise, `MAX_HASH_THREADS` is the main mechanism + # to limit resources. + #cpu: 1000m volumes: - name: rauthy-config configMap: @@ -492,6 +525,29 @@ spec: port: 8080 ``` +#### Hiqlite Internal TLS + +You can of course also provide TLS certificates for the Hiqlite internal communication. Two Independent networks are +created: one for the Raft-Internal network traffic like heartbeats and data replication, and a second one for the +"external" Hiqlite API. This is used by other Hiqlite cluster members for management purposes and to execute things +like consistent queries on the leader node. + +You can provide TLS certificates for both of them independently via the following config variables: + +``` +## Hiqlite TLS + +# If given, these keys / certificates will be used to establish +# TLS connections between nodes. +HQL_TLS_RAFT_KEY=tls/key.pem +HQL_TLS_RAFT_CERT=tls/cert-chain.pem +HQL_TLS_RAFT_DANGER_TLS_NO_VERIFY=true + +HQL_TLS_API_KEY=tls/key.pem +HQL_TLS_API_CERT=tls/cert-chain.pem +HQL_TLS_API_DANGER_TLS_NO_VERIFY=true +``` + #### Additional steps There are a few more things to do when going into production, but these are the same for Kubernetes and Docker and will diff --git a/book/src/getting_started/main.md b/book/src/getting_started/main.md index 356311ce8..9878a801f 100644 --- a/book/src/getting_started/main.md +++ b/book/src/getting_started/main.md @@ -2,32 +2,26 @@ ## Choose A Database -You only need to answer a single question to decide, which database you should use: +Rauthy's default database is [Hiqlite](https://github.com/sebadob/hiqlite). Under the hood, it's using SQLite, but it +adds an additional layer on top making it highly-available using +the [Raft Consensus Algorithm](https://raft.github.io/). 
Don't let the SQLite engine under the hood fool you, it will
+most probably handle anything you throw at it, as long as your disks are fast enough. Hiqlite can easily saturate a
+1GBit/s network connection with just database (write) traffic. All reads are local, which means they are way faster than
+with Postgres in any scenario.

-**Do you want / need a HA deployment?**
-
-If the answer is **Yes**, choose **Postgres**, **otherwise** choose **SQLite**.
-
-SQLite is no performance bottleneck at all. After some first very rough tests, it does not have problems with even
-millions of users. The bottleneck will always be the password hashing algorithm settings, your needs for how secure
-it should be and how many concurrent logins you want to be able to handle (more on that later).
+If you already have a Postgres up and running with everything set up anyway, you might want to choose it as your main DB,
+but I do not recommend deploying a Postgres instance just for Rauthy. This would be a waste of precious resources.

```admonish hint
-If you want to migrate from Postgres to SQLite at a later point, you can do this at any time very easily.
+If you want to migrate between databases at a later point, you can do this at any time very easily.
Just take a look at the [Reference Config](../config/config.html) and the variable `MIGRATE_DB_FROM`.
```

## Container Images

-Rauthy comes with different container images. The difference between them is not only x86 vs arm64, but the database
-driver under the hood as well. The reason is, that almost all SQL queries are checked at compile time. To make this
-possible, different images need to be created. Apart from the database driver, there is no difference between them.
-You also can't use the "wrong" image by accident. If you try to use a Postgres image with a SQLite database URL and
-vice versa, Rauthy will yell at you at startup and panic on purpose.
-
-- The "normal" container images can be used for Postgres
-- The `*-lite` images use an embedded SQLite
-- The `MIGRATE_DB_FROM` (explained later) can be used with any combination of image / database
+Rauthy versions before `0.27.0` had different container images depending on the database you choose. However, this is
+not the case anymore. There is only a single image which works with any configuration. It works on `x86_64` and `arm64`
+architectures.

At the time of writing, you can run Rauthy either with [Docker](./docker.md) or inside [Kubernetes](./k8s.md).
Both *Getting Started* guides do not cover all set up you might want to do for going into production. Especially the
diff --git a/book/src/intro.md b/book/src/intro.md
index 87ac1af8a..4a13d693c 100644
--- a/book/src/intro.md
+++ b/book/src/intro.md
@@ -20,7 +20,7 @@
Rauthy is an OpenID Connect (OIDC) Provider and Single Sign-On solution written

It tries to be as secure as possible by default while still providing all the options needed to be compatible with
older systems. For instance, if you create a new OIDC client, it activates `ed25519` as the default algorithm for
-token signing and S256 PKCE flow. This will not work with old clients, which do not support it, but you can of course
+token signing and S256 PKCE flow. This will not work with clients, which do not support it, but you can of course
deactivate this to your liking.

### MFA and Passwordless Login
@@ -28,8 +28,7 @@ deactivate this to your liking.
**Option 1:** Password + Security Key (without User Verification):
Rauthy provides FIDO 2 / Webauthn login flows. 
If you once logged in on a new client with your username + password, -you -will get an encrypted cookie which will allow you to log in without a password from that moment on. You only need to +you will get an encrypted cookie which will allow you to log in without a password from that moment on. You only need to have a FIDO compliant Passkey being registered for your account. **Option 2:** @@ -39,24 +38,28 @@ your FIDO 2 Passkey. Your account will not even have / need a password. This log those passkeys, that can provide User Verification (UV) to always have at least 2FA security. ```admonish note -Discoverable credentials are discouraged with Rauthy. This means you will need to enter your E-Mail for the login -(which will be auto-filled after the first one), but Rauthy passkeys do not use any storage on your device. For instance -when you have a Yubikey which can store 25 passkeys, it will not use a single slot there even having full support. +Discoverable credentials are discouraged with Rauthy (for good reason). This means you will need to enter your E-Mail +for the login (which will be autofilled after the first one), but Rauthy passkeys do not use any storage on your +device. For instance when you have a Yubikey which can store 25 passkeys, it will not use a single slot there even +having full support. ``` ### Fast and efficient -The main goal was to provide an SSO solution like Keycloak and others while using a way lower footprint -and being more efficient with resources. For instance, Rauthy can easily run a fully blown SSO provider on just a -Raspberry Pi. It makes extensive use of caching to be as fast as possible in cases where your database is further -away or just a bit slower, because it is maybe running on an SBC from an SD card or in the cloud with the lowest storage -bandwidth. Most things are even cached for several hours and special care has been taken into account in case of cache -eviction and invalidation. +The main goal was to provide an SSO solution like Keycloak and others while using a way lower footprint and being more +efficient with resources. For instance, Rauthy can easily run a fully blown SSO provider on just a Raspberry Pi. It +makes extensive use of caching for everything used in the authentication chain to be as fast as possible. Most things +are even cached for several hours and special care has been taken into account in case of cache eviction and +invalidation. -A Rauthy deployment with the embedded SQLite, filled caches and a small set of clients and users configured typically -only uses **between 20 and 25 MB of memory**! This is pretty awesome when comparing it to other existing solutions -out there. If a password from a login is hashed, the memory consumption will of course go up way higher than this -depending on your configured Argon2ID parameters, which you got fully under control. +Rauthy comes in 2 flavors: with embedded [Hiqlite](https://github.com/sebadob/hiqlite), which is the default setting, +or you can optionally use a Postgres as your database, if you already have an instance running anyway. + +A deployment with the embedded [Hiqlite](https://github.com/sebadob/hiqlite), filled caches / buffers and a small set of +clients and users configured typically settles around 61MB of memory. Using Postgres, it will end up at ~36MB, but then +you have of course your Postgres consuming additional resources. This is pretty awesome when comparing it to other +existing solutions out there. 
If a password from a login is hashed, the memory consumption will of course go up way +higher than this, depending on your configured Argon2ID parameters. For achieving the speed and efficiency, some additional design tradeoffs were made. For instance, some things you configure statically via config file and not dynamically via UI, while most of them are configured once and then never @@ -64,37 +67,34 @@ touched again. ### Highly Available -Even though it makes extensive use of caching, you can run it in HA mode. It uses its own embedded distributed HA cache -called [redhac](https://crates.io/crates/redhac), which cares about cache eviction on remote hosts. -You can choose between a *SQLite* for single instance deployments and a *Postgres*, if you need HA. MySQL support might -come in the future. +Even though it makes extensive use of caching, you can run it in HA mode. [Hiqlite](https://github.com/sebadob/hiqlite) +creates its own embedded HA cache layer. A HA deployment is available with +both [Hiqlite](https://github.com/sebadob/hiqlite) and Postgres. ### Admin UI + User Account Dashboard -Unlike many other options, `rauthy` does have an Admin UI which can be used to basically do almost any operation you -might need to administrate the whole application and its users. There is also an account dashboard for each individual -user, where users will get a basic overview over their account and can self-manage som values, password, passkeys, and -so son. +Rauthy does have an Admin UI which can be used to basically do almost any operation you might need to administrate the +whole application and its users. There is also an account dashboard for each individual user, where users will get a +basic overview over their account and can self-manage som values, password, passkeys, and so on. Some Screenshots and further introduction will follow in the future. ### Client Branding -You have a simple way to create some kind of branding or stylized look for the Login page for each client. -The whole color theme can be changed and each client can have its own custom logo. -Additionally, if you modify the branding for the default `rauthy` client, it will not only change the look for the Login -page, but also for the Account and Admin page. +You have a simple way to create a branding or stylized look for the Login page for each client. The whole color theme +can be changed and each client can have its own custom logo. Additionally, if you modify the branding for the default +`rauthy` client, it will not only change the look for the Login page, but also for the Account and Admin page. ### Events and Auditing -Rauthy comes with an Event and Alerting-System. Events are generated in all kinds of scenarios. They can be sent via +Rauthy comes with an Event- and Alerting-System. Events are generated in all kinds of scenarios. They can be sent via E-Mail, Matrix or Slack, depending on the severity and the configured level. You will see them in the Admin UI in real time, or you can subscribe to the events stream and externally handle them depending on your own business logic. ### Brute-Force and basic DoS protection Rauthy has brute force and basic DoS protection for the login endpoint. Your timeout will be artificially delayed after -enough invalid logins. It does auto-balacklist IP's that exceeded too many invalid logins, with automatic -expiry of the blacklisting. You can, if you like, manually blacklist certain IP's as well via the Admin UI. +enough invalid logins. 
It auto-blacklists IPs that exceeded too many invalid logins, with automatic expiry of the +blacklisting. You can, if you like, manually blacklist certain IPs as well via the Admin UI. ### IoT Ready @@ -110,7 +110,7 @@ Benchmarks for v1.0.0 have not been done yet, but after some first basic tests a can confirm that Rauthy has no issues handling millions of users. The first very basic tests have been done with SQLite and ~11 million users. All parts and functions kept being fast and responsive with the only exception that the user-search in the admin UI was slowed down with such a high user count. It took ~2-3 seconds at that point to get a -result, which should be no issue at all so far (Postgres tests have not been done yet). +result, which should be no issue at all so far (Postgres tests have not been done yet). The only limiting factor at that point will be your configuration and needs for password hashing security. It really depends on how many resources you want to use for hashing (more resources == more secure) and how many concurrent logins at the exact same time you need to support. @@ -118,17 +118,16 @@ at the exact same time you need to support. ### Already in production Rauthy is already being used in production, and it works with all typical OIDC clients (so far). It was just not an -open source project for quite some time. -Keycloak was a rough inspiration in certain places and if something is working with Keycloak, it does with `rauthy` too -(again, so far). +open source project for quite some time. ### Features List - [x] Fully working OIDC provider -- [x] SQLite or Postgres as database +- [x] [Hiqlite](https://github.com/sebadob/hiqlite) or Postgres as database - [x] Fast and efficient with minimal footprint -- [x] Highly configurable - [x] Secure default values +- [x] Highly configurable +- [x] High-Availability - [x] True passwordless accounts with E-Mail + Magic Link + Passkey - [x] Dedicated Admin UI - [x] Account dashboard UI for each user with self-service @@ -156,11 +155,9 @@ Keycloak was a rough inspiration in certain places and if something is working w - [x] Optional Force MFA for each individual client - [x] Additional encryption inside the database for the most critical entries - [x] Automatic database backups with configurable retention and auto-cleanup (SQLite only) -- [x] auto-encrypted backups (SQLite) -- [x] Ability to push SQLite backups to S3 storage -- [x] auto-restore SQLite backups from file and s3 -- [x] High-Availability -- [x] HA cache layer with its own (optional) mTLS connection +- [x] auto-encrypted backups ([Hiqlite](https://github.com/sebadob/hiqlite) only) +- [x] Ability to push [Hiqlite](https://github.com/sebadob/hiqlite) backups to S3 storage +- [x] auto-restore [Hiqlite](https://github.com/sebadob/hiqlite) backups from file or s3 - [x] Username enumeration prevention - [x] Login / Password hashing rate limiting - [x] Session client peer IP binding @@ -176,7 +173,7 @@ Keycloak was a rough inspiration in certain places and if something is working w - [x] SwaggerUI documentation - [x] Configurable E-Mail templates for NewPassword + ResetPassword events - [x] Prometheus `/metrics` endpoint on separate port -- [x] No-Setup migrations between different databases (Yes, even between SQLite and Postgres) +- [x] No-Setup migrations between different databases (Yes, even between [Hiqlite](https://github.com/sebadob/hiqlite) + and Postgres) - [x] Can serve a basic `webid` document -- [x] Experimental FedCM support - +- [x] Experimental FedCM support \ 
No newline at end of file diff --git a/book/src/work/api_keys.md b/book/src/work/api_keys.md index 945c59138..f3f673e04 100644 --- a/book/src/work/api_keys.md +++ b/book/src/work/api_keys.md @@ -60,8 +60,3 @@ If you try to access an endpoint with an API Key that has insufficient access ri error message with description, which access rights you actually need. ![api key permission](../config/img/api_key_permission.png) - -```admonish hint -When you set up a fresh Rauthy instance, you have the option to [bootstrap](../config/bootstrap.md#api-key) an API Key, -which is the only situation where you are allowed to do it without an active Rauthy admin session. -``` \ No newline at end of file diff --git a/dev_notes.md b/dev_notes.md index 4ab8353ba..64a728993 100644 --- a/dev_notes.md +++ b/dev_notes.md @@ -2,14 +2,15 @@ ## CURRENT WORK -## Documentation TODO +## Before v0.27.0 release + +- check if `REFRESH_TOKEN_GRACE_TIME` can be dropped with Hiqlite +- randomize default admin user id on prod init + set email to `BOOTSTRAP_ADMIN_EMAIL` before password info logging + +### Documentation TODO -- breaking: only a single container from now on - breaking: add `USER_REG_OPEN_REDIRECT` to the book - `HealthResponse` response has been changed with Hiqlite -> breaking change -- database backup config has been changed slightly -- restore from backup has changed slightly -- write a small guide on how to migrate from existing sqlite to hiqlite ## Stage 1 - essentials diff --git a/docs/config/backup.html b/docs/config/backup.html index c492dc3c0..32d1e8003 100644 --- a/docs/config/backup.html +++ b/docs/config/backup.html @@ -178,71 +178,61 @@

Backups

Postgres

If you are using Postgres as the main database, Rauthy does not do any backups.
There are a lot of way better tools out there to handle this task.

-

SQLite

-

If Rauthy is using a SQLite, it does automatic backups, which can be configured with:

-
# Cron job for automatic data store backups (default: "0 0 4 * * * *")
-# sec min hour day_of_month month day_of_week year
-BACKUP_TASK="0 0 4 * * * *"
-
-# The name for the data store backups. The current timestamp will always be appended automatically.
-# default: rauthy-backup-
-BACKUP_NAME="rauthy-backup-"
-
-# All backups older than the specified hours will be cleaned up automatically (default: 720)
-BACKUP_RETENTION_LOCAL=24
+

Hiqlite

+

If Rauthy is using Hiqlite, it does automatic backups, which can be configured with:

+
# When the auto-backup task should run.
+# Accepts cron syntax:
+# "sec min hour day_of_month month day_of_week year"
+# default: "0 30 2 * * * *"
+HQL_BACKUP_CRON="0 30 2 * * * *"
+
+# Local backups older than the configured days will be cleaned up after
+# the backup cron job.
+# default: 30
+#HQL_BACKUP_KEEP_DAYS=30
+
+# Backups older than the configured days will be cleaned up locally
+# after each `Client::backup()` and the cron job `HQL_BACKUP_CRON`.
+# default: 3
+#HQL_BACKUP_KEEP_DAYS_LOCAL=3
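To illustrate the seven-field cron syntax, a sketch with made-up values (not the defaults) that runs the backup task twice a day and keeps a shorter local retention could look like this:

# hypothetical example values - adjust them to your environment
HQL_BACKUP_CRON="0 15 1,13 * * * *"
HQL_BACKUP_KEEP_DAYS=14
HQL_BACKUP_KEEP_DAYS_LOCAL=2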
 
-

All these backups are written inside the pod / container into /app/data/backup.
-The database itself will be saved in /app/data by default.

+

All these backups are written inside the pod / container into data/state_machine/backups.

This difference makes it possible, that you could add a second volume mount to the container.
You then have the database itself on a different disk than the backups, which is the most simple and straight forward -approach to have a basic backup strategy.

-
Info

-The SQLite backups are done with VACUUM, which means you can just use the backups as a normal database again.
-This makes it possible, to just use the Database Migration feature to apply backups very easily.
+approach to have a basic backup strategy. However, it is recommended to use S3 for backups, especially for HA +deployments.
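If you do go the second-volume route, a sketch building on the Docker setup from the getting started guide could look like this (the host path /mnt/backup-disk/rauthy is just an example):

docker run -d \
  -v $(pwd)/rauthy/rauthy.cfg:/app/rauthy.cfg \
  -v $(pwd)/rauthy/data:/app/data \
  -v /mnt/backup-disk/rauthy:/app/data/state_machine/backups \
  -p 8080:8080 \
  --name rauthy \
  ghcr.io/sebadob/rauthy:0.26.2-lite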

Remote Backups to S3 Storage

-

SQLite backups can be pushed to an S3 bucket after creation. This way you can keep only very low amount of local +

Hiqlite backups can be pushed to an S3 bucket after creation. This way, you can keep only a very low amount of local backups and keep older ones on cheaper object storage.

Rauthy has been tested against MinIO and Garage S3 storage and is working fine with both, so I expect any standard S3 API to just work out of the box. You need to provide an Access Key + Secret with write access to an existing bucket and Rauthy will take care of the rest. All backups pushed to S3 will automatically be encrypted with the currently active ENC_KEY_ACTIVE from the Rauthy config.

The configuration is done with the following values:

-
# The following section will only be taken into account, when
-# SQLite is used as the main database. If you use Postgres, you
-# should use Postgres native tooling like for instance `pgbackrest`
-# to manage your backups.
-# If S3 access is configured, your SQLite backups will be encrypted
-# and pushed into the configured bucket.
-#S3_URL=
-#S3_REGION=
-#S3_PATH_STYLE=false
-#S3_BUCKET=my_s3_bucket_name
-#S3_ACCESS_KEY=
-#S3_ACCESS_SECRET=
-#S3_DANGER_ACCEPT_INVALID_CERTS=false
+
# Access values for the S3 bucket where backups will be pushed to.
+#HQL_S3_URL=https://s3.example.com
+#HQL_S3_BUCKET=my_bucket
+#HQL_S3_REGION=example
+#HQL_S3_PATH_STYLE=true
+#HQL_S3_KEY=s3_key
+#HQL_S3_SECRET=s3_secret
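As a sketch, a self-hosted MinIO instance (host, bucket and credentials are placeholders) would typically use path style and could be configured like this:

HQL_S3_URL=https://minio.example.com
HQL_S3_BUCKET=rauthy-backups
HQL_S3_REGION=us-east-1
HQL_S3_PATH_STYLE=true
HQL_S3_KEY=<access key with write access to the bucket>
HQL_S3_SECRET=<secret for the access key>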
 

Disaster Recovery

If you really lost all your data, you can easily restore automatically from the latest backup. This works with either a local file backup or with an encrypted remote backup on s3 storage (as long as you still have the ENC_KEY_ACTIVE that has been used for the remote backup).
-This, again, works only for SQLite. When you are using Postgres, you really should use native tooling which is way -better at this.

+This, again, works only for Hiqlite. When you are using Postgres, you should use Postgres native tooling like +pgBackRest which is way better at this.

The process is really simple:

-
  • set an environment variable before the start
  • start up Rauthy
  • check the logs and wait for the backup to be finished
  • after a successful restore, Rauthy will start its normal operation
+
  1. Have the cluster shut down. This is probably the case anyway, if you need to restore from a backup.
  2. Provide a backup file name on S3 storage with the HQL_BACKUP_RESTORE value with prefix s3: (encrypted), or a file
on disk (plain sqlite file) with the prefix file:.
  3. Start up Rauthy.
  4. Check the logs and wait for the restore to be finished.
  5. After a successful restore, Rauthy will start its normal operation.
  6. Make sure to remove the HQL_BACKUP_RESTORE env value.

You only need to set this single value:

-
# Restores the given backup
-#
-# CAUTION: Be very careful with this option - it will overwrite
-# any existing database! The best way to use this option is to
-# provide it as an environment variable for a single start up
-# and then remove it directly after success.
-#
-# This only works when you are using a SQLite database!
-# If you are running on Postgres, you must use Postgres-native
-# tooling to handle your backups.
-#
-# You can either restore a local backup, or an encrypted one
-# from S3 storage.
-#
-# For restoring from a local backup, provide the folder name
-# of the backup you want to restore. Local SQLite backups are
-# always in `./data/backup/rauthy-backup-TIMESTAMP/` folders.
-# You only provide the backup folder name itself, in this case
-# it would be `rauthy-backup-TIMESTAMP` like this:
-# RESTORE_BACKUP=file:rauthy-backup-TIMESTAMP
-#
-# If you want to restore an encrypted backup from S3 storage,
-# you must provide the object name in the configured bucket.
-# For instance, let's say we have an object named
-# `rauthy-0.20.0-1703243039.cryptr` in our bucket, then the
-# format would be:
-# RESTORE_BACKUP=s3:rauthy-0.20.0-1703243039.cryptr
-#
-#RESTORE_BACKUP=
+
# If you ever need to restore from a backup, the process is simple.
+# 1. Have the cluster shut down. This is probably the case anyway, if
+#    you need to restore from a backup.
+# 2. Provide the backup file name on S3 storage with the
+#    HQL_BACKUP_RESTORE value.
+# 3. Start up the cluster again.
+# 4. After the restart, make sure to remove the HQL_BACKUP_RESTORE
+#    env value.
+#HQL_BACKUP_RESTORE=
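To make this concrete, a hypothetical single start-up could set one of the following (the backup names are placeholders - take the real one from your bucket or the local backups folder):

# encrypted backup on S3 storage:
HQL_BACKUP_RESTORE=s3:<backup object name>
# or a plain SQLite backup file on disk:
#HQL_BACKUP_RESTORE=file:<backup file name>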
 
diff --git a/docs/config/config.html b/docs/config/config.html index f9473f875..a3cbeb898 100644 --- a/docs/config/config.html +++ b/docs/config/config.html @@ -176,6 +176,30 @@

Rauthy Documentation

Reference Config

This shows a full example config with (hopefully) every value nicely described.

+

You can configure a lot here, but the most important variables you most likely want to change when going into production +are the following. Lines beginning with !!! are absolutely critical. The order matches their location in the +reference config below.

+
  • OPEN_USER_REG, USER_REG_DOMAIN_RESTRICTION
  • PEER_IP_HEADER_NAME - when behind a CDN
  • HQL_BACKUP_CRON, HQL_BACKUP_KEEP_DAYS, HQL_S3_URL, HQL_S3_BUCKET, HQL_S3_REGION, HQL_S3_PATH_STYLE, HQL_S3_KEY, HQL_S3_SECRET - for Hiqlite backups, does not matter when using Postgres
  • BOOTSTRAP_ADMIN_EMAIL
  • HQL_NODE_ID_FROM or HQL_NODE_ID + HQL_NODES - HA only
  • !!! HQL_SECRET_RAFT + HQL_SECRET_API - set even when not using HA
  • DATABASE_URL + HIQLITE - if you want to use Postgres
  • RAUTHY_ADMIN_EMAIL
  • EMAIL_SUB_PREFIX, SMTP_URL, SMTP_USERNAME, SMTP_PASSWORD, SMTP_FROM
  • !!! ENC_KEY_ACTIVE + ENC_KEYS
  • MAX_HASH_THREADS
  • any target in the EVENTS / AUDIT section
  • !!! PUB_URL
  • PROXY_MODE + TRUSTED_PROXIES
  • TLS_CERT + TLS_KEY - if you don't terminate TLS on your reverse proxy
  • HQL_TLS_RAFT_KEY + HQL_TLS_RAFT_CERT + HQL_TLS_API_KEY + HQL_TLS_API_CERT - if you want to internally encrypt database / cache traffic
  • !!! RP_ID + RP_ORIGIN + RP_NAME
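As a minimal sketch, the !!! values alone could look like the following. Every value here is a placeholder, and the secrets and keys must be generated as described in the getting started guides:

PUB_URL=auth.example.com
HQL_SECRET_RAFT=<random alphanumeric value, at least 16 characters>
HQL_SECRET_API=<random alphanumeric value, at least 16 characters>
ENC_KEYS="
<key id>/<base64 encoded random key>
"
ENC_KEY_ACTIVE=<key id>
RP_ID=auth.example.com
RP_ORIGIN=https://auth.example.com
RP_NAME="My Rauthy Instance"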
#####################################
@@ -194,8 +218,9 @@ 

Reference C ##################################### # If the User Registration endpoint should be accessible by anyone. -# If not, an admin must create each new user. (default: false) -#OPEN_USER_REG=true +# If not, an admin must create each new user. +# default: false +#OPEN_USER_REG=false # If set to true, the `/userinfo` endpoint will do additional validations. # The non-strict mode will fetch the user by id from the `sub` claim and make @@ -231,6 +256,23 @@

Reference C #evil.net #" +# If set to `true`, any validation of the `redirect_uri` provided during +# a user registration will be disabled. +# Clients can use this feature to redirect the user back to their application +# after a successful registration, so instead of ending up in the user +# dashboard, they come back to the client app that initiated the registration. +# +# The given `redirect_uri` will be compared against all registered +# `client_uri`s and will throw an error, if there is no match. However, +# this check will prevent ephemeral clients from using this feature. Only +# if you need it in combination with ephemeral clients, you should +# set this option to `true`. Otherwise it is advised to set the correct +# Client URI in the admin UI. The `redirect_uri` will be allowed if it starts +# with any registered `client_uri`. +# +# default: false +#USER_REG_OPEN_REDIRECT=true + # If set to true, a violation inside the CSRF protection middleware based # on Sec-* headers will block invalid requests. Usually you always want this # enabled. You may only set it to false during the first testing phase if you @@ -257,7 +299,7 @@

Reference C # disabled, this feature will not work. You can validate the IPs for each session # in the Admin UI. If these are correct, your setup is okay. # -# (default: true) +# default: true #SESSION_VALIDATE_IP=true # By default, Rauthy will log a warning into the logs, if an active password @@ -279,7 +321,7 @@

Reference C # Cloudflare, which adds custom headers in this case. # For instance, if your requests are proxied through cloudflare, your would # set `CF-Connecting-IP`. -PEER_IP_HEADER_NAME="CF-Connecting-IP" +#PEER_IP_HEADER_NAME="CF-Connecting-IP" # You can enable authn/authz headers which would be added to the response # of the `/auth/v1/oidc/forward_auth` endpoint. With `AUTH_HEADERS_ENABLE=true`, @@ -289,27 +331,27 @@

Reference C # However, be careful when using this, since this kind of authn/authz has # a lot of pitfalls out of the scope of Rauthy. # default: false -AUTH_HEADERS_ENABLE=true +#AUTH_HEADERS_ENABLE=true # Configure the header names being used for the different values. # You can change them to your needs, if you cannot easily change your # downstream apps. # default: x-forwarded-user -AUTH_HEADER_USER=x-forwarded-user +#AUTH_HEADER_USER=x-forwarded-user # default: x-forwarded-user-roles -AUTH_HEADER_ROLES=x-forwarded-user-roles +#AUTH_HEADER_ROLES=x-forwarded-user-roles # default: x-forwarded-user-groups -AUTH_HEADER_GROUPS=x-forwarded-user-groups +#AUTH_HEADER_GROUPS=x-forwarded-user-groups # default: x-forwarded-user-email -AUTH_HEADER_EMAIL=x-forwarded-user-email +#AUTH_HEADER_EMAIL=x-forwarded-user-email # default: x-forwarded-user-email-verified -AUTH_HEADER_EMAIL_VERIFIED=x-forwarded-user-email-verified +#AUTH_HEADER_EMAIL_VERIFIED=x-forwarded-user-email-verified # default: x-forwarded-user-family-name -AUTH_HEADER_FAMILY_NAME=x-forwarded-user-family-name +#AUTH_HEADER_FAMILY_NAME=x-forwarded-user-family-name # default: x-forwarded-user-given-name -AUTH_HEADER_GIVEN_NAME=x-forwarded-user-given-name +#AUTH_HEADER_GIVEN_NAME=x-forwarded-user-given-name # default: x-forwarded-user-mfa -AUTH_HEADER_MFA=x-forwarded-user-mfa +#AUTH_HEADER_MFA=x-forwarded-user-mfa # You can set different security levels for Rauthy's cookies. # The safest option would be 'host', but may not be desirable when @@ -348,61 +390,39 @@

Reference C ############# BACKUPS ############### ##################################### -# Cron job for automatic data store backups (default: "0 0 4 * * * *") -# sec min hour day_of_month month day_of_week year -#BACKUP_TASK="0 0 4 * * * *" - -# The name for the data store backups. The current timestamp -# will always be appended automatically. (default: rauthy-backup-) -#BACKUP_NAME="rauthy-backup-" - -# All backups older than the specified hours will be cleaned up -# automatically (default: 720) -#BACKUP_RETENTION_LOCAL=720 - -# The following section will only be taken into account, when -# SQLite is used as the main database. If you use Postgres, you -# should use Postgres native tooling like for instance `pgbackrest` -# to manage your backups. -# If S3 access is configured, your SQLite backups will be encrypted -# and pushed into the configured bucket. -#S3_URL= -#S3_REGION= -#S3_PATH_STYLE=false -#S3_BUCKET=my_s3_bucket_name -#S3_ACCESS_KEY= -#S3_ACCESS_SECRET= -#S3_DANGER_ALLOW_INSECURE=false - -# Restores the given backup -# -# CAUTION: Be very careful with this option - it will overwrite -# any existing database! The best way to use this option is to -# provide it as an environment variable for a single start up -# and then remove it directly after success. -# -# This only works when you are using a SQLite database! -# If you are running on Postgres, you must use Postgres-native -# tooling to handle your backups. -# -# You can either restore a local backup, or an encrypted one -# from S3 storage. -# -# For restoring from a local backup, provide the folder name -# of the backup you want to restore. Local SQLite backups are -# always in `./data/backup/rauthy-backup-TIMESTAMP/` folders. -# You only provide the backup folder name itself, in this case -# it would be `rauthy-backup-TIMESTAMP` like this: -# RESTORE_BACKUP=file:rauthy-backup-TIMESTAMP -# -# If you want to restore an encrypted backup from S3 storage, -# you must provide the object name in the configured bucket. -# For instance, let's say we have an object named -# `rauthy-0.20.0-1703243039.cryptr` in our bucket, then the -# format would be: -# RESTORE_BACKUP=s3:rauthy-0.20.0-1703243039.cryptr -# -#RESTORE_BACKUP= +# When the auto-backup task should run. +# Accepts cron syntax: +# "sec min hour day_of_month month day_of_week year" +# default: "0 30 2 * * * *" +#HQL_BACKUP_CRON="0 30 2 * * * *" + +# Local backups older than the configured days will be cleaned up after +# the backup cron job. +# default: 30 +#HQL_BACKUP_KEEP_DAYS=30 + +# Backups older than the configured days will be cleaned up locally +# after each `Client::backup()` and the cron job `HQL_BACKUP_CRON`. +# default: 3 +#HQL_BACKUP_KEEP_DAYS_LOCAL=3 + +# If you ever need to restore from a backup, the process is simple. +# 1. Have the cluster shut down. This is probably the case anyway, if +# you need to restore from a backup. +# 2. Provide the backup file name on S3 storage with the +# HQL_BACKUP_RESTORE value. +# 3. Start up the cluster again. +# 4. After the restart, make sure to remove the HQL_BACKUP_RESTORE +# env value. +#HQL_BACKUP_RESTORE= + +# Access values for the S3 bucket where backups will be pushed to. +HQL_S3_URL=https://s3.example.com +HQL_S3_BUCKET=my_bucket +HQL_S3_REGION=example +HQL_S3_PATH_STYLE=true +HQL_S3_KEY=s3_key +HQL_S3_SECRET=s3_secret ##################################### ############ BOOTSTRAP ############## @@ -410,7 +430,7 @@

Reference C # If set, the email of the default admin will be changed # during the initialization of an empty production database. -#BOOTSTRAP_ADMIN_EMAIL=admin@localhost.de +BOOTSTRAP_ADMIN_EMAIL=admin@localhost.de # If set, this plain text password will be used for the # initial admin password instead of generating a random @@ -478,7 +498,7 @@

Reference C #BOOTSTRAP_API_KEY_SECRET=twUA2M7RZ8H3FyJHbti2AcMADPDCxDqUKbvi8FDnm3nYidwQx57Wfv6iaVTQynMh ##################################### -############## CACHE ################ +############# CLUSTER ############### ##################################### # Can be set to 'k8s' to try to split off the node id from the hostname @@ -499,17 +519,20 @@

Reference C # id addr_raft addr_api # id addr_raft addr_api # -# 2 nodes must be separated by 2 `\n` HQL_NODES=" 1 localhost:8100 localhost:8200 " -# If set to `true`, all SQL statements will be logged for debugging -# purposes. -# default: false -HQL_LOG_STATEMENTS=true +# Sets the limit when the Raft will trigger the creation of a new +# state machine snapshot and purge all logs that are included in +# the snapshot. +# Higher values can achieve more throughput in very write heavy +# situations but will end up in more disk usage and longer +# snapshot creations / log purges. +# default: 10000 +#HQL_LOGS_UNTIL_SNAPSHOT=10000 -# Secrets for Raft internal authentication as well as for the Hiqlite API. +# Secrets for Raft internal authentication as well as for the API. # These must be at least 16 characters long and you should provide # different ones for both variables. HQL_SECRET_RAFT=SuperSecureSecret1337 @@ -525,40 +548,97 @@

Reference C ############ DATABASE ############### ##################################### -# The database driver will be chosen at runtime depending on -# the given DATABASE_URL format. Examples: -# Sqlite: 'sqlite:data/rauthy.db' or 'sqlite::memory:' -# Postgres: 'postgresql://User:PasswordWithoutSpecialCharacters@localhost:5432/DatabaseName' +# Connection string to connect to a Postgres database. +# This will be ignored as long as `HIQLITE=true`. # -# NOTE: The password in this case should be alphanumeric. Special -# characters could cause problems in the connection string. +# Format: 'postgresql://User:PasswordWithoutSpecialCharacters@localhost:5432/DatabaseName' # -# CAUTION: -# To make the automatic migrations work with Postgres15, when -# you do not want to just use the `postgres` user, You need -# to have a user with the same name as the DB / schema. For -# instance, the following would work without granting extra -# access to the `public` schema which is disabled by default -# since PG15: +# NOTE: The password in this case should be alphanumeric. +# Special characters could cause problems in the connection string. # +# CAUTION: To make the automatic migrations work with Postgres 15+, +# when you do not want to just use the `postgres` user, You need +# to have a user with the same name as the DB / schema. For instance, +# the following would work without granting extra access to the +# `public` schema which is disabled by default since PG15: # database: rauthy # user: rauthy # schema: rauthy with owner rauthy # -#DATABASE_URL=sqlite::memory: -#DATABASE_URL=sqlite:data/rauthy.db #DATABASE_URL=postgresql://rauthy:123SuperSafe@localhost:5432/rauthy -# Max DB connections - irrelevant for SQLite (default: 5) -#DATABASE_MAX_CONN=5 +# Max DB connections for the Postgres pool. +# Irrelevant for Hiqlite. +# default: 20 +#DATABASE_MAX_CONN=20 -# If specified, the current Database, set with DATABASE_URL, -# will be DELETED and OVERWRITTEN with a migration from the -# given database with this variable. Can be used to migrate -# between different databases. -# +# If specified, the currently configured Database will be +# DELETED and OVERWRITTEN with a migration from the given +# database with this variable. Can be used to migrate between +# different databases. +# To migrate from Hiqlite, use the `sqlite:` prefix. +# # !!! USE WITH CARE !!! -#MIGRATE_DB_FROM=sqlite:data/rauthy.db +# +#MIGRATE_DB_FROM=sqlite:data/state_machine/db/hiqlite.db +#MIGRATE_DB_FROM=postgresql://postgres:123SuperSafe@localhost:5432/rauthy + +# Hiqlite is the default database for Rauthy. +# You can opt-out and use Postgres instead by setting the proper +# `DATABASE_URL=postgresql://...` by setting `HIQLITE=false` +# default: true +#HIQLITE=true + +# The data dir hiqlite will store raft logs and state machine data in. +# default: data +#HQL_DATA_DIR=data + +# The file name of the SQLite database in the state machine folder. +# default: hiqlite.db +#HQL_FILENAME_DB=hiqlite.db + +# If set to `true`, all SQL statements will be logged for debugging +# purposes. +# default: false +#HQL_LOG_STATEMENTS=false + +# The size of the pooled connections for local database reads. +# +# Do not confuse this with a pool size for network databases, as it +# is much more efficient. You can't really translate between them, +# because it depends on many things, but assuming a factor of 10 is +# a good start. 
This means, if you needed a (read) pool size of 40 +# connections for something like a postgres before, you should start +# at a `read_pool_size` of 4. +# +# Keep in mind that this pool is only used for reads and writes will +# travel through the Raft and have their own dedicated connection. +# +# default: 4 +#HQL_READ_POOL_SIZE=4 + +# Enables immediate flush + sync to disk after each Log Store Batch. +# The situations where you would need this are very rare, and you +# should use it with care. +# +# The default is `false`, and a flush + sync will be done in 200ms +# intervals. Even if the application should crash, the OS will take +# care of flushing left-over buffers to disk and no data will get +# lost. Only if something worse happens, you might lose the last +# 200ms of commits. +# +# The only situation where you might want to enable this option is +# when you are on a host that might lose power out of nowhere, and +# it has no backup battery, or when your OS / disk itself is unstable. +# +# `sync_immediate` will greatly reduce the write throughput and put +# a lot more pressure on the disk. If you have lots of writes, it +# can pretty quickly kill your SSD for instance. +#HQL_SYNC_IMMEDIATE=false + +# The password for the Hiqlite dashboard as Argon2ID hash. +# '123SuperMegaSafe' in this example +#HQL_PASSWORD_DASHBOARD=JGFyZ29uMmlkJHY9MTkkbT0zMix0PTIscD0xJE9FbFZURnAwU0V0bFJ6ZFBlSEZDT0EkTklCN0txTy8vanB4WFE5bUdCaVM2SlhraEpwaWVYOFRUNW5qdG9wcXkzQQ== # Defines the time in seconds after which the `/health` endpoint # includes HA quorum checks. The initial delay solves problems @@ -596,12 +676,12 @@

Reference C # Grant flow. You may increase the default of 300 seconds, if you have # "slow users" and they are simply not fast enough with the verification. # default: 300 -DEVICE_GRANT_CODE_LIFETIME=300 +#DEVICE_GRANT_CODE_LIFETIME=300 # The length of the `user_code` the user has to enter manually for # auth request validation. This must be < 64 characters. # default: 8 -DEVICE_GRANT_USER_CODE_LENGTH=8 +#DEVICE_GRANT_USER_CODE_LENGTH=8 # Specifies the rate-limit in seconds per IP for starting new Device # Authorization Grant flows. This is especially important for public @@ -611,19 +691,19 @@

Reference C # If you use the `device_code` grant with confidential clients only, # you can leave this unset, which will not rate-limit the endpoint. # default: not set -DEVICE_GRANT_RATE_LIMIT=1 +#DEVICE_GRANT_RATE_LIMIT=1 # The interval in seconds which devices are told to use when they # poll the token endpoint during Device Authorization Grant flow. # default: 5 -DEVICE_GRANT_POLL_INTERVAL=5 +#DEVICE_GRANT_POLL_INTERVAL=5 # You can define a global lifetime in hours for refresh tokens issued # from a Device Authorization Grant flow. You might want to have a # higher lifetime than normal refresh tokens, because they might be # used in IoT devices which may be offline for longer periods of time. # default: 72 -DEVICE_GRANT_REFRESH_TOKEN_LIFETIME=72 +#DEVICE_GRANT_REFRESH_TOKEN_LIFETIME=72 ##################################### ############## DPOP ################# @@ -632,14 +712,14 @@

Reference C # May be set to 'false' to disable forcing the usage of # DPoP nonce's. # default: true -DPOP_FORCE_NONCE=true +#DPOP_FORCE_NONCE=true # Lifetime in seconds for DPoP nonces. These are used to # limit the lifetime of a client's DPoP proof. Do not set # lower than 30 seconds to avoid too many failed client # token requests. # default: 900 -DPOP_NONCE_EXP=900 +#DPOP_NONCE_EXP=900 ##################################### ########## DYNAMIC CLIENTS ########## @@ -808,7 +888,7 @@

Reference C # https://sebadob.github.io/rauthy/config/argon2.html # M_COST should never be below 32768 in production ARGON2_M_COST=131072 -# T_COST should never be below 1 in production +# T_COST must be greater than 0 ARGON2_T_COST=4 # P_COST should never be below 2 in production ARGON2_P_COST=8 @@ -1099,7 +1179,6 @@

Reference C #LOG_LEVEL=info # The log level for the `Hiqlite` persistence layer. -# At the time of writing, only the cache will use `hiqlite` # default: info LOG_LEVEL_DATABASE=info @@ -1293,7 +1372,7 @@

Reference C ############### TLS ################# ##################################### -## Rauthy TLS +## UI + API TLS # Overwrite the path to the TLS certificate file in PEM # format for rauthy (default: tls/tls.crt) @@ -1304,7 +1383,7 @@

Reference C # (default: tls/tls.key) #TLS_KEY=tls/tls.key -## CACHE TLS +## Database / Cache internal TLS # If given, these keys / certificates will be used to establish # TLS connections between nodes. diff --git a/docs/config/db_migration.html b/docs/config/db_migration.html index 438a7476c..bc3aa89a3 100644 --- a/docs/config/db_migration.html +++ b/docs/config/db_migration.html @@ -175,19 +175,20 @@

Rauthy Documentation

Database Migrations

-

You can migrate easily between SQLite and Postgres, or just between different instances of them.

-

Let's say you started out by evaluating Rauthy with a SQLite and a single instance deployment. Later on, you want to -migrate to a HA setup, which requires you to use a Postgres.

+

You can migrate easily between Hiqlite and Postgres, or just between different instances of the same database.

+

Let's say you started out by evaluating Rauthy with the default Hiqlite and a single instance deployment. Later on, you +want to migrate to Postgres for whatever reason. Or you started with Postgres and you want to reduce your memory +footprint by switching to Hiqlite. All of this is easily possible.

Solution: MIGRATE_DB_FROM

If you set the MIGRATE_DB_FROM in Rauthy's config, it will perform a migration at the next restart.
The way it works is the following:

  1. At startup, have a look if MIGRATE_DB_FROM is configured
  2. If yes, then connect to the given database
-  3. At the same time, connect to the database specified in the DATABASE_URL
-  4. Overwrite all existing data in DATABASE_URL with the data from the MIGRATE_DB_FROM database
+  3. At the same time, connect to the database specified via HIQLITE and DATABASE_URL
+  4. Overwrite all existing data in the target database with the data from the MIGRATE_DB_FROM source
  5. Close the connection to MIGRATE_DB_FROM
-  6. Use the DATABASE_URL as the new database and start normal operation
+  6. Start normal operation
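For instance, a minimal sketch for moving from the default Hiqlite over to an existing Postgres (the connection string is a placeholder) would be:

HIQLITE=false
DATABASE_URL=postgresql://rauthy:123SuperSafe@localhost:5432/rauthy
# for a single start-up only - remove it again after a successful migration
MIGRATE_DB_FROM=sqlite:data/state_machine/db/hiqlite.db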
@@ -210,8 +211,8 @@

Datab

-

v0.14 and beyond: if you want to migrate to a different database, for instance from SQLite to Postgres, you need to -switch to the correct rauthy image as well. Rauthy v0.14 and beyond has different container images for the databases.

+

Any version below 0.27.0: if you want to migrate to a different database, for instance from SQLite to Postgres, you need to +switch to the correct Rauthy image as well.

diff --git a/docs/config/ha.html b/docs/config/ha.html index 523298503..dc20035b2 100644 --- a/docs/config/ha.html +++ b/docs/config/ha.html @@ -176,12 +176,11 @@

Rauthy Documentation

High Availability

Rauthy is capable of running in a High Availability Mode (HA).

-

Some values, like authentication codes for instance, do live in the cache only. Additionally, there might come an -option with a future version which offers a special in-memory only mode in some situations.

-

Because of this, all instances create and share a single HA cache layer, which means at the same time, that you cannot -just scale up the replicas infinitely without adjusting the config. The optimal amount of replicas for a HA mode would -be 3, or if you need even higher resilience 5. More replicas should work just fine, but this has never been really -tested and the performance will degrade at some point.

+

Some values, like authentication codes for instance, do live in the cache only. Because of this, all instances create +and share a single HA cache layer, which also means that you cannot just scale up the replicas infinitely +without adjusting the config. The optimal number of replicas for HA mode is 3, or 5 if you need even higher +resilience. More replicas should work just fine, but this has never really been tested and the latency will +increase at some point.

The Cache layer uses another project of mine called Hiqlite. It uses the Raft algorithm under the hood to achieve consistency.

Configuration

Earlier versions of Rauthy have been using redhac for the HA cache layer. While -redhac was working fine, it had a few design issues I wanted to get rid of. Since v0.26.0, Rauthy uses the above -mentioned Hiqlite instead. You only need to configure a few variables:

+redhac was working fine, it had a few design issues I wanted to get rid of. Since v0.26.0, Rauthy uses the +above-mentioned Hiqlite instead. You only need to configure a few variables:

HQL_NODE_ID

The HQL_NODE_ID is mandatory, even for a single replica deployment with only a single node in HQL_NODES. If you deploy Rauthy as a StatefulSet inside Kubernetes, you can ignore this value and just set HQL_NODE_ID_FROM -below. If you deploy anywere else or you are not using a StatefulSet, you need to set the HQL_NODE_ID to tell Rauthy +below. If you deploy anywhere else, or you are not using a StatefulSet, you need to set the HQL_NODE_ID to tell Rauthy which node of the Raft cluster it should be.

# The node id must exist in the nodes and there must always be
 # at least a node with ID 1
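For instance, the second member of a three-node cluster would set nothing more than this (a minimal sketch):

```
# This node is member 2 of the Raft cluster. The ID must match
# one of the `id` entries in HQL_NODES.
HQL_NODE_ID=2
```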
@@ -219,8 +218,8 @@ 

HQL_N #HQL_NODE_ID_FROM=k8s

HQL_NODES

-

Using this value, you defined the Cache / Raft members. This must be given even if you just deploy a single instance. -The description from the reference config should be clear enough:

+

This value defines the Cache / Raft members. It must be given even if you just deploy a single instance. The description +from the reference config should be clear enough:

# All cluster member nodes.
 # To make setting the env var easy, the values are separated by `\s`
 # while nodes are separated by `\n`
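As a sketch, a three-node cluster could look like the following in the `id addr_raft addr_api` format. The `rauthy-N.rauthy` hostnames are made-up placeholders for a typical Kubernetes StatefulSet; substitute your own addresses:

```
HQL_NODES="
1 rauthy-0.rauthy:8100 rauthy-0.rauthy:8200
2 rauthy-1.rauthy:8100 rauthy-1.rauthy:8200
3 rauthy-2.rauthy:8100 rauthy-2.rauthy:8200
"
```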
diff --git a/docs/config/logging.html b/docs/config/logging.html
index d5baad214..9c625dc4d 100644
--- a/docs/config/logging.html
+++ b/docs/config/logging.html
@@ -190,6 +190,16 @@ 

LOG_LEVEL

+

LOG_LEVEL_DATABASE

+

The Hiqlite database logging is, at the time of writing, pretty verbose on purpose. The whole persistence layer with the +Raft cluster setup has been written from the ground up. The amount of logging will be reduced in later versions, once +the whole layer has proven to be really solid, but for now you get more information in case you need to debug +something.

+

You can reduce the default logging and, for instance, set it to warn or error only.

+
# The log level for the `Hiqlite` persistence layer.
+# default: info
+LOG_LEVEL_DATABASE=info
+
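To quiet the persistence layer down, a sketch:

```
# only log warnings and above from Hiqlite
LOG_LEVEL_DATABASE=warn
```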

LOG_LEVEL_ACCESS

For changing the logging behavior for access logs to the API endpoints, you will need to set the LOG_LEVEL_ACCESS. If you have access logging configured at your firewall or reverse proxy, you can disable the LOG_LEVEL_ACCESS fully @@ -213,6 +223,18 @@

LOG_L
# default: Modifying
LOG_LEVEL_ACCESS=Basic

+

LOG_FMT

+

Rauthy can output logs as JSON data with the following variable:

+
# You can change the log output format to JSON, if you set:
+# `LOG_FMT=json`.
+# Keep in mind that some logs will include escaped values,
+# for instance when `Text` already logs JSON at debug level.
+# Some other logs, like an Event for instance, will be formatted
+# as Text anyway. If you need to auto-parse events, please consider
+# using an API token and listening to them actively.
+# default: text
+#LOG_FMT=text
+
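Switching the output to JSON is then just:

```
LOG_FMT=json
```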

Events

Events are used for auditing, so you never miss anything. If something important happens, you usually need to inspect logs to catch it, but why would you, if you did not notice any problems? This is where Rauthy Events help you out.
@@ -244,13 +266,16 @@

Matrix

# you should provide `EVENT_MATRIX_ACCESS_TOKEN`. # If both are given, the `EVENT_MATRIX_ACCESS_TOKEN` will be preferred. # -# If left empty, no messages will be sent to Matrix. +# If left empty, no messages will not be sent to Matrix. # Format: `@<user_id>:<server address>` #EVENT_MATRIX_USER_ID= # Format: `!<random string>:<server address>` #EVENT_MATRIX_ROOM_ID= #EVENT_MATRIX_ACCESS_TOKEN= #EVENT_MATRIX_USER_PASSWORD= +# URL of your Matrix server. +# default: https://matrix.org +#EVENT_MATRIX_SERVER_URL=https://matrix.org # Optional path to a PEM Root CA certificate file for the Matrix client. #EVENT_MATRIX_ROOT_CA_PATH=path/to/my/root_ca_cert.pem # May be set to disable the TLS validation for the Matrix client. diff --git a/docs/config/tls.html b/docs/config/tls.html index 746f933c2..8201719eb 100644 --- a/docs/config/tls.html +++ b/docs/config/tls.html @@ -236,9 +236,10 @@

Config

The reference config contains a TLS section with all the values you can set.

For this example, we will be using the same certificates for both the internal cache mTLS connections and the public facing HTTPS server.

-

Cache

-

The cache layer (optionally) uses TLS, if you provide certificates. Simply provide the following values from the TLS -section in the reference config:

+

Hiqlite

+

Hiqlite can run the whole database layer, and it will always take care of caching. It can be configured to use TLS +internally, if you provide certificates. Simply provide the following values from the TLS section in the reference +config:

# If given, these keys / certificates will be used to establish
 # TLS connections between nodes.
 HQL_TLS_RAFT_KEY=tls/hiqlite/tls.key
diff --git a/docs/config/user_reg.html b/docs/config/user_reg.html
index 8265c9f05..6212ad20f 100644
--- a/docs/config/user_reg.html
+++ b/docs/config/user_reg.html
@@ -294,19 +294,26 @@ 

Redirect Hints<

This makes it possible to use Rauthy as your upstream provider without the user really needing to interact with or know about it in detail, which again leads to less confusion.

-
- -
-

If you want to complete this improved UX setup, you should set a Client URI for the client in the admin dashboard. -When there is a valid value, a small home icon will be shown inside the login form, so a user can get back to the -client's URI without possibly screwing up with incorrectly using the browsers back button.

-
-
+

By default, the allowed redirect_uris are restricted to all existing client_uris in the database. They will be +compared via client_uri.startsWith(redirect_uri). If you want to opt out of the additional redirect_uri checks and +configure an open redirect to allow just anything, you can do so:

+
# If set to `true`, any validation of the `redirect_uri` provided during
+# a user registration will be disabled.
+# Clients can use this feature to redirect the user back to their application
+# after a successful registration, so instead of ending up in the user
+# dashboard, they come back to the client app that initiated the registration.
+#
+# The given `redirect_uri` will be compared against all registered
+# `client_uri`s and will throw an error, if there is no match. However,
+# this check will prevent ephemeral clients from using this feature. Only
+# if you need it in combination with ephemeral clients, you should
+# set this option to `true`. Otherwise it is advised to set the correct
+# Client URI in the admin UI. The `redirect_uri` will be allowed if it starts
+# with any registered `client_uri`.
+#
+# default: false
+#USER_REG_OPEN_REDIRECT=true
+

Custom Frontend

Depending on your application, you may want to create your own frontend for the registration. For speed and efficiency reasons, Rauthy does not allow you to overwrite the existing templates, but you can host your own UI of course.

@@ -325,10 +332,10 @@

Custom Fronte
 #[validate(email)]
 email: String,
 /// Validation: `[a-zA-Z0-9Γ€-ΓΏ-\\s]{1,32}`
-#[validate(regex(path = "*RE_USER_NAME", code = "[a-zA-Z0-9Γ€-ΓΏ-\\s]{1,32}"))]
+#[validate(regex(path = "*RE_USER_NAME", code = "[a-zA-Z0-9Γ€-ΕΏ-\\s]{1,32}"))]
 family_name: String,
 /// Validation: `[a-zA-Z0-9Γ€-ΓΏ-\\s]{1,32}`
-#[validate(regex(path = "*RE_USER_NAME", code = "[a-zA-Z0-9Γ€-ΓΏ-\\s]{1,32}"))]
+#[validate(regex(path = "*RE_USER_NAME", code = "[a-zA-Z0-9Γ€-ΕΏ-\\s]{1,32}"))]
 given_name: String,
 /// Validation: `[a-zA-Z0-9,.:/_\-&?=~#!$'()*+%]+`
 #[validate(regex(path = "*RE_URI", code = "[a-zA-Z0-9,.:/_\\-&?=~#!$'()*+%]+"))]
diff --git a/docs/getting_started/docker.html b/docs/getting_started/docker.html
index ee6d30248..90a274952 100644
--- a/docs/getting_started/docker.html
+++ b/docs/getting_started/docker.html
@@ -1,482 +1,361 @@
[Docker - Rauthy Documentation: only page chrome / navigation markup changed in this hunk]
diff --git a/docs/getting_started/k8s.html b/docs/getting_started/k8s.html
index 017cabf7f..93d2939c1 100644
--- a/docs/getting_started/k8s.html
+++ b/docs/getting_started/k8s.html
@@ -218,9 +218,7 @@

@@ -247,19 +245,8 @@

@@ -321,13 +333,15 @@

Create and apply the stateful set

touch sts.yaml
 
@@ -341,11 +355,15 @@

TLS Certifi
 apiVersion: traefik.containo.us/v1alpha1
 kind: IngressRoute
 metadata:
-  name: rauthy-https
+  name: rauthy-http
   namespace: rauthy
 spec:
   entryPoints:
@@ -609,6 +642,24 @@

TLS Certifi
 - name: rauthy
   port: 8080

+

Hiqlite Internal TLS

+

You can of course also provide TLS certificates for the Hiqlite internal communication. Two independent networks are +created: one for the Raft-internal traffic like heartbeats and data replication, and a second one for the +"external" Hiqlite API. This is used by other Hiqlite cluster members for management purposes and to execute things +like consistent queries on the leader node.

+

You can provide TLS certificates for both of them independently via the following config variables:

+
## Hiqlite TLS
+
+# If given, these keys / certificates will be used to establish
+# TLS connections between nodes.
+HQL_TLS_RAFT_KEY=tls/key.pem
+HQL_TLS_RAFT_CERT=tls/cert-chain.pem
+HQL_TLS_RAFT_DANGER_TLS_NO_VERIFY=true
+
+HQL_TLS_API_KEY=tls/key.pem
+HQL_TLS_API_CERT=tls/cert-chain.pem
+HQL_TLS_API_DANGER_TLS_NO_VERIFY=true
+

Additional steps

There are a few more things to do when going into production, but these are the same for Kubernetes and Docker and will be explained in later chapters.

diff --git a/docs/getting_started/main.html b/docs/getting_started/main.html index 100cae70e..a7c9f7ba5 100644 --- a/docs/getting_started/main.html +++ b/docs/getting_started/main.html @@ -176,12 +176,14 @@

Rauthy Documentation

Getting Started

Choose A Database

-

You only need to answer a single question to decide, which database you should use:

-

Do you want / need a HA deployment?

-

If the answer is Yes, choose Postgres, otherwise choose SQLite.

-

SQLite is no performance bottleneck at all. After some first very rough tests, it does not have problems with even -millions of users. The bottleneck will always be the password hashing algorithm settings, your needs for how secure -it should be and how many concurrent logins you want to be able to handle (more on that later).

+

Rauthy's default database is Hiqlite. Under the hood, it's using SQLite, but it +adds an additional layer on top, making it highly available using +the Raft Consensus Algorithm. Don't let the SQLite engine under the hood fool you: it will +most probably handle anything you throw at it, as long as your disks are fast enough. Hiqlite can easily saturate a +1GBit/s network connection with just database (write) traffic. All reads are local, which means they are way faster than +with Postgres in any scenario.

+

If you already have a Postgres up and running with everything set up anyway, you might want to choose it as your main DB, +but I do not recommend deploying a Postgres instance just for Rauthy. This would be a waste of precious resources.
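As a sketch of both options side by side (treating `HIQLITE` as a simple boolean toggle is an assumption here, and the Postgres URL is purely illustrative; see the reference config for the exact format):

```
# default: the embedded Hiqlite
HIQLITE=true

# or, if you already run a Postgres anyway (illustrative values):
#HIQLITE=false
#DATABASE_URL=postgresql://rauthy:SuperSafe@localhost:5432/rauthy
```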

@@ -190,21 +192,14 @@

Choose A

-

If you want to migrate from Postgres to SQLite at a later point, you can do this at any time very easily. +

If you want to migrate between databases at a later point, you can do this at any time very easily. Just take a look at the Reference Config and the variable MIGRATE_DB_FROM.

Container Images

-

Rauthy comes with different container images. The difference between them is not only x86 vs arm64, but the database -driver under the hood as well. The reason is, that almost all SQL queries are checked at compile time. To make this -possible, different images need to be created. Apart from the database driver, there is no difference between them. -You also can't use the "wrong" image by accident. If you try to use a Postgres image with a SQLite database URL and -vice versa, Rauthy will yell at you at startup and panic on purpose.

-
    -
  • The "normal" container images can be used for Postgres
  • -
  • The *-lite images use an embedded SQLite
  • -
  • The MIGRATE_DB_FROM (explained later) can be used with any combination of image / database
  • -
+

Rauthy versions before 0.27.0 had different container images depending on the database you chose. However, this is +not the case anymore. There is only a single image which works with any configuration. It works on both x86_64 and arm64 +architectures.

At the time of writing, you can run Rauthy either with Docker or inside Kubernetes.
Neither Getting Started guide covers all the setup you might want to do for going into production. The Docker guide especially is more for testing.
diff --git a/docs/index.html b/docs/index.html index 748660a34..f33c4d513 100644 --- a/docs/index.html +++ b/docs/index.html @@ -205,14 +205,13 @@

What it is

Secure by default

It tries to be as secure as possible by default while still providing all the options needed to be compatible with older systems. For instance, if you create a new OIDC client, it activates ed25519 as the default algorithm for -token signing and S256 PKCE flow. This will not work with old clients, which do not support it, but you can of course +token signing and S256 PKCE flow. This will not work with clients which do not support it, but you can of course deactivate this to your liking.

MFA and Passwordless Login

Option 1:
Password + Security Key (without User Verification):
Rauthy provides FIDO 2 / Webauthn login flows. If you once logged in on a new client with your username + password, -you -will get an encrypted cookie which will allow you to log in without a password from that moment on. You only need to +you will get an encrypted cookie which will allow you to log in without a password from that moment on. You only need to have a FIDO compliant Passkey registered for your account.

Option 2:
Passkey-Only Accounts:
@@ -227,49 +226,49 @@

-

Discoverable credentials are discouraged with Rauthy. This means you will need to enter your E-Mail for the login -(which will be auto-filled after the first one), but Rauthy passkeys do not use any storage on your device. For instance -when you have a Yubikey which can store 25 passkeys, it will not use a single slot there even having full support.

+

Discoverable credentials are discouraged with Rauthy (for good reason). This means you will need to enter your E-Mail +for the login (which will be autofilled after the first one), but Rauthy passkeys do not use any storage on your +device. For instance, if you have a Yubikey which can store 25 passkeys, it will not use a single slot there, even +while having full support.

Fast and efficient

-

The main goal was to provide an SSO solution like Keycloak and others while using a way lower footprint -and being more efficient with resources. For instance, Rauthy can easily run a fully blown SSO provider on just a -Raspberry Pi. It makes extensive use of caching to be as fast as possible in cases where your database is further -away or just a bit slower, because it is maybe running on an SBC from an SD card or in the cloud with the lowest storage -bandwidth. Most things are even cached for several hours and special care has been taken into account in case of cache -eviction and invalidation.

-

A Rauthy deployment with the embedded SQLite, filled caches and a small set of clients and users configured typically -only uses between 20 and 25 MB of memory! This is pretty awesome when comparing it to other existing solutions -out there. If a password from a login is hashed, the memory consumption will of course go up way higher than this -depending on your configured Argon2ID parameters, which you got fully under control.

+

The main goal was to provide an SSO solution like Keycloak and others while using a way lower footprint and being more +efficient with resources. For instance, Rauthy can easily run a full-blown SSO provider on just a Raspberry Pi. It +makes extensive use of caching for everything used in the authentication chain to be as fast as possible. Most things +are even cached for several hours, and special care has been taken with cache eviction and +invalidation.

+

Rauthy comes in 2 flavors: with the embedded Hiqlite, which is the default, +or optionally with Postgres as your database, if you already have an instance running anyway.

+

A deployment with the embedded Hiqlite, filled caches / buffers and a small set of +clients and users configured typically settles around 61MB of memory. Using Postgres, it will end up at ~36MB, but then +you of course have Postgres consuming additional resources. This is pretty awesome when comparing it to other +existing solutions out there. If a password from a login is hashed, the memory consumption will of course go up way +higher than this, depending on your configured Argon2ID parameters.

To achieve this speed and efficiency, some additional design tradeoffs were made. For instance, some things are configured statically via the config file and not dynamically via the UI, though most of them are configured once and then never touched again.

Highly Available

-

Even though it makes extensive use of caching, you can run it in HA mode. It uses its own embedded distributed HA cache -called redhac, which cares about cache eviction on remote hosts. -You can choose between a SQLite for single instance deployments and a Postgres, if you need HA. MySQL support might -come in the future.

+

Even though it makes extensive use of caching, you can run it in HA mode. Hiqlite +creates its own embedded HA cache layer. A HA deployment is available with +both Hiqlite and Postgres.

Admin UI + User Account Dashboard

-

Unlike many other options, rauthy does have an Admin UI which can be used to basically do almost any operation you -might need to administrate the whole application and its users. There is also an account dashboard for each individual -user, where users will get a basic overview over their account and can self-manage som values, password, passkeys, and -so son.
+

Rauthy does have an Admin UI which can be used to do almost any operation you might need to administrate the +whole application and its users. There is also an account dashboard for each individual user, where users will get a +basic overview of their account and can self-manage some values, password, passkeys, and so on.
Some screenshots and a further introduction will follow in the future.

Client Branding

-

You have a simple way to create some kind of branding or stylized look for the Login page for each client.
-The whole color theme can be changed and each client can have its own custom logo.
-Additionally, if you modify the branding for the default rauthy client, it will not only change the look for the Login -page, but also for the Account and Admin page.

+

You have a simple way to create a branding or stylized look for the Login page for each client. The whole color theme +can be changed and each client can have its own custom logo. Additionally, if you modify the branding for the default +rauthy client, it will not only change the look for the Login page, but also for the Account and Admin page.

Events and Auditing

-

Rauthy comes with an Event and Alerting-System. Events are generated in all kinds of scenarios. They can be sent via +

Rauthy comes with an Event- and Alerting-System. Events are generated in all kinds of scenarios. They can be sent via E-Mail, Matrix or Slack, depending on the severity and the configured level. You will see them in the Admin UI in real time, or you can subscribe to the events stream and externally handle them depending on your own business logic.

Brute-Force and basic DoS protection

Rauthy has brute force and basic DoS protection for the login endpoint. Your timeout will be artificially delayed after -enough invalid logins. It does auto-balacklist IP's that exceeded too many invalid logins, with automatic -expiry of the blacklisting. You can, if you like, manually blacklist certain IP's as well via the Admin UI.

+enough invalid logins. It auto-blacklists IPs that exceed too many invalid logins, with automatic expiry of the +blacklisting. You can, if you like, manually blacklist certain IPs as well via the Admin UI.

IoT Ready

With the ability to run on devices with very limited resources and compatibility with the OAuth Device Authorization Grant device_code flow, Rauthy is a very good choice for IoT projects. The IdP itself can easily
@@ -281,27 +280,27 @@

Already in production

Rauthy is already being used in production, and it works with all typical OIDC clients (so far). It was just not an -open source project for quite some time.
-Keycloak was a rough inspiration in certain places and if something is working with Keycloak, it does with rauthy too -(again, so far).

+open source project for quite some time.

Features List